Displaying publications 1 - 20 of 312 in total

  1. Omer ME, Mustafa M, Ali N, Abd Rahman NH
    Asian Pac J Cancer Prev, 2023 Dec 01;24(12):4167-4177.
    PMID: 38156852 DOI: 10.31557/APJCP.2023.24.12.4167
    OBJECTIVE: Cure models are frequently used in survival analysis to account for a cured fraction in the data. When a cure rate is present, researchers often prefer cure models over standard parametric models for analysing the survival data. These models make it possible to define the probability distribution of survival durations for patients who remain at risk. Various distributions can be considered for the survival times, such as the Exponentiated Weibull Exponential (EWE), Exponential Exponential (EE), Weibull, and lognormal distributions. The objective of this research is to choose the distribution that most accurately represents the survival times of patients who have not been cured. This is accomplished by comparing non-mixture cure models based on the EWE distribution and its sub-distributions with models based on distributions outside the EWE family.

    MATERIAL AND METHODS: A sample of 85 patients diagnosed with superficial bladder tumours was selected to be used in fitting the non-mixture cure model. In order to estimate the parameters of the suggested model, which takes into account the presence of a cure rate, censored data, and covariates, we utilized the maximum likelihood estimation technique using R software version 3.5.7.

    RESULT: Comparing the various parametric models fitted to the data, both with and without the cure fraction and without incorporating any predictors, the EE distribution yields the lowest AIC, BIC, and HQIC values among all the distributions considered in this study (1191.921/1198.502, 1201.692/1203.387, 1195.851/1200.467). Furthermore, for a non-mixture cure model using the EE distribution with covariates, the estimated ratio of the probabilities of being cured in the placebo and thiotepa groups (with its 95% confidence interval) was 0.76130 (0.13914, 6.81863).

    CONCLUSION: The findings of this study indicate that the EE distribution is the best choice for modelling the survival times of individuals diagnosed with bladder cancer.

    Matched MeSH terms: Models, Statistical*
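The non-mixture (promotion time) cure model of entry 1 sets the population survival to S(t) = exp(-θF(t)), so the cure fraction is exp(-θ). The following is a minimal illustrative sketch of fitting such a model by maximum likelihood; it assumes a plain exponential baseline F(t) rather than the paper's EWE/EE families, simulated data rather than the bladder-tumour data, and no covariates:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t, delta):
    """Negative log-likelihood of the non-mixture cure model with an
    exponential baseline: S(t) = exp(-theta * F(t)), hazard = theta * f(t)."""
    theta, lam = np.exp(params)              # log-scale keeps both positive
    F = 1.0 - np.exp(-lam * t)               # baseline CDF
    f = lam * np.exp(-lam * t)               # baseline pdf
    S = np.exp(-theta * F)                   # population survival
    return -np.sum(delta * np.log(theta * f * S) + (1 - delta) * np.log(S))

# Simulate from the promotion-time mechanism: N ~ Poisson(theta) latent
# risks; N = 0 means cured; otherwise the event time is the minimum of N
# exponential latent times. Administrative censoring at tau.
rng = np.random.default_rng(0)
n, theta_true, lam_true, tau = 500, 1.0, 1.0, 5.0
N = rng.poisson(theta_true, n)
lat = np.array([rng.exponential(1 / lam_true, k).min() if k > 0 else np.inf
                for k in N])
t = np.minimum(lat, tau)
delta = (lat < tau).astype(float)            # 1 = event observed

res = minimize(neg_log_lik, x0=[0.0, 0.0], args=(t, delta),
               method="Nelder-Mead")
theta_hat, lam_hat = np.exp(res.x)
cure_hat = np.exp(-theta_hat)                # estimated cure fraction
```

With theta_true = 1 the true cure fraction is exp(-1) ≈ 0.37, and the estimate should land near it for a sample of this size.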
  2. Sim KS, Cheng Z, Chuah HT
    Scanning, 2004 12 23;26(6):287-95.
    PMID: 15612206
    A new technique based on the statistical autoregressive (AR) model has recently been developed as a solution to signal-to-noise ratio (SNR) estimation in scanning electron microscope (SEM) images. In the present study, we propose to cascade the Lagrange time delay (LTD) estimator with the AR model. We call this technique the mixed Lagrange time delay estimation autoregressive (MLTDEAR) model. In a few test cases involving different images, this model is found to provide an optimal solution for SNR estimation problems under different noise environments. In addition, it requires only a small filter order and has no noticeable estimation bias. The performance of the proposed estimator is compared with three existing methods: the simple method, the first-order linear interpolator, and the AR-based estimator, over several images. The MLTDEAR estimator, being more robust to noise, is significantly more efficient than the other three methods.
    Matched MeSH terms: Models, Statistical
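The estimators compared in entry 2 all work by extrapolating the image autocorrelation back to lag zero: the measured lag-0 value contains signal plus noise power, while lags ≥ 1 are approximately noise-free. A sketch of the first-order linear interpolator variant on a synthetic 1-D signal (illustrative only, not the MLTDEAR model itself):

```python
import numpy as np

def snr_linear_interp(x):
    """Estimate SNR by linearly extrapolating the autocorrelation r(1), r(2)
    back to lag 0. The gap between the measured r(0) and the extrapolated
    value is the noise power."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    r0_hat = 2 * r[1] - r[2]          # noise-free lag-0 estimate
    noise = r[0] - r0_hat             # noise power
    return r0_hat / noise

# Correlated "signal" (moving average of white noise) plus white noise
# scaled so that the true SNR is about 4.
rng = np.random.default_rng(1)
sig = np.convolve(rng.normal(size=5000), np.ones(8) / 8, mode="same")
noisy = sig + rng.normal(scale=np.sqrt(np.var(sig) / 4), size=sig.size)
snr = snr_linear_interp(noisy)
```

The moving-average signal has a triangular (hence locally linear) autocorrelation, so the linear extrapolation recovers the signal power almost exactly here; rougher signals bias this simple estimator, which is what motivates the AR-based refinements in the paper.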
  3. Al-Kharasani NM, Zulkarnain ZA, Subramaniam S, Hanapi ZM
    Sensors (Basel), 2018 Feb 15;18(2).
    PMID: 29462884 DOI: 10.3390/s18020597
    Routing in Vehicular Ad hoc Networks (VANET) is complicated by the highly dynamic mobility of the nodes. The efficiency of a routing protocol is influenced by factors such as network density, bandwidth constraints, traffic load, and mobility patterns, which cause frequent changes in network topology. Therefore, Quality of Service (QoS) support is strongly needed to enhance the capability of the routing protocol and improve overall network performance. In this paper, we introduce a statistical framework to address the problem of optimizing routing configuration parameters in Vehicle-to-Vehicle (V2V) communication. Our framework is based on the utilization of network resources to reflect the current state of the network and to balance the trade-off between frequent changes in network topology and the QoS requirements. It consists of three stages: a simulation stage used to execute different urban scenarios, a function stage used as a competitive approach to aggregate the weighted costs of the factors into a single value, and an optimization stage used to evaluate the communication cost and obtain the optimal configuration based on the competitive cost. The simulation results show significant performance improvement in terms of Packet Delivery Ratio (PDR), Normalized Routing Load (NRL), Packet Loss (PL), and End-to-End Delay (E2ED).
    Matched MeSH terms: Models, Statistical
  4. Ghanim F, Darus M
    ScientificWorldJournal, 2013;2013:475643.
    PMID: 24396297 DOI: 10.1155/2013/475643
    By using a linear operator, we obtain some new results for a normalized analytic function f defined by means of the Hadamard product with the Hurwitz zeta function. A class related to this function is introduced and its properties are discussed.
    Matched MeSH terms: Models, Statistical*
  5. Liang SN, Borondo F, Lan BL
    PLoS One, 2012;7(11):e48447.
    PMID: 23152774 DOI: 10.1371/journal.pone.0048447
    The statistical predictions of Newtonian and special-relativistic mechanics, which are calculated from an initially Gaussian ensemble of trajectories, are compared for a low-speed scattering system. The comparisons are focused on the mean dwell time, transmission and reflection coefficients, and the position and momentum means and standard deviations. We find that the statistical predictions of the two theories do not always agree as conventionally expected. The predictions are close if the scattering is non-chaotic but they are radically different if the scattering is chaotic and the initial ensemble is well localized in phase space. Our result indicates that for low-speed chaotic scattering, special-relativistic mechanics must be used, instead of the standard practice of using Newtonian mechanics, to obtain empirically-correct statistical predictions from an initially well-localized Gaussian ensemble.
    Matched MeSH terms: Models, Statistical*
  6. Goh J, Hj M Ali N
    PLoS One, 2015;10(7):e0132782.
    PMID: 26182211 DOI: 10.1371/journal.pone.0132782
    Over the last few decades, cubic splines have been widely used to approximate differential equations due to their ability to produce highly accurate solutions. In this paper, the numerical solution of a two-dimensional elliptic partial differential equation is treated by a specific cubic spline approximation in the x-direction and finite differences in the y-direction. A four-point explicit group (EG) iterative scheme with an acceleration tool is then applied to the obtained system. The formulation and implementation of the method for solving physical problems are presented in detail. The computational complexity is also discussed, and comparative results are tabulated to illustrate the efficiency of the proposed method.
    Matched MeSH terms: Models, Statistical*
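Entry 6 accelerates a point-iterative scheme on a discretized 2-D elliptic PDE. As a much-simplified stand-in (plain 5-point finite differences with Gauss-Seidel iteration, not the paper's cubic-spline/explicit-group scheme), the structure of such an iteration looks like:

```python
import numpy as np

def gauss_seidel_laplace(u, tol=1e-6, max_iter=10000):
    """Gauss-Seidel sweeps for Laplace's equation on a grid with fixed
    (Dirichlet) boundary values: each interior point is repeatedly replaced
    by the average of its four neighbours until updates fall below tol."""
    for _ in range(max_iter):
        diff = 0.0
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                new = 0.25 * (u[i - 1, j] + u[i + 1, j]
                              + u[i, j - 1] + u[i, j + 1])
                diff = max(diff, abs(new - u[i, j]))
                u[i, j] = new
        if diff < tol:
            break
    return u

n = 20
u = np.zeros((n, n))
u[0, :] = 1.0                      # Dirichlet boundary: top edge held at 1
u = gauss_seidel_laplace(u)
```

Group schemes like the paper's four-point EG method update small blocks of points at once instead of single points, which cuts the iteration count substantially; the per-point sweep above is only the baseline they improve upon.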
  7. Zalina MD, Desa MN, Nguyen VT, Kassim AH
    Water Sci Technol, 2002;45(2):63-8.
    PMID: 11890166
    This paper discusses the comparative assessment of eight candidate distributions in providing accurate and reliable maximum rainfall estimates for Malaysia. The models considered were the Gamma, Generalised Normal, Generalised Pareto, Generalised Extreme Value, Gumbel, Log Pearson Type III, Pearson Type III and Wakeby. Annual maximum rainfall series for one-hour resolution from a network of seventeen automatic gauging stations located throughout Peninsular Malaysia were selected for this study. The length of rainfall records varies from twenty-three to twenty-eight years. Model parameters were estimated using the L-moment method. The quantitative assessment of the descriptive ability of each model was based on the Probability Plot Correlation Coefficient test combined with root mean squared error, relative root mean squared error and maximum absolute deviation. Bootstrap resampling was employed to investigate the extrapolative ability of each distribution. On the basis of these comparisons, it can be concluded that the GEV distribution is the most appropriate distribution for describing the annual maximum rainfall series in Malaysia.
    Matched MeSH terms: Models, Statistical*
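Entry 7 fits all eight candidate distributions by the method of L-moments. As an illustration, the first two sample L-moments can be computed from probability-weighted moments; for the Gumbel distribution (the GEV's zero-shape special case, used here because its L-moment estimators have a simple closed form) they give scale α = λ2/ln 2 and location ξ = λ1 - γα, with γ the Euler-Mascheroni constant:

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments:
    l1 = b0 (the mean) and l2 = 2*b1 - b0."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1))
    return b0, 2 * b1 - b0

def gumbel_from_l_moments(l1, l2):
    """Gumbel parameters from L-moments: lambda1 = xi + gamma*alpha,
    lambda2 = alpha * ln 2."""
    alpha = l2 / np.log(2)
    xi = l1 - 0.5772156649 * alpha    # Euler-Mascheroni constant
    return xi, alpha

rng = np.random.default_rng(2)
x = rng.gumbel(loc=10.0, scale=3.0, size=20000)
xi, alpha = gumbel_from_l_moments(*sample_l_moments(x))
```

L-moment estimators like these are preferred for short, heavy-tailed rainfall records because they are far less sensitive to outliers than ordinary moments.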
  8. Butt UM, Letchmunan S, Hassan FH, Koh TW
    PLoS One, 2022;17(9):e0274172.
    PMID: 36070317 DOI: 10.1371/journal.pone.0274172
    Continued urbanization poses several challenges for law enforcement agencies in ensuring a safe and secure environment, and countries spend a substantial share of their budgets to control and prevent crime. However, limited progress has been made in crime prediction due to the scarcity of spatiotemporal crime data. Several machine learning, deep learning, and time series analysis techniques have been exploited, but accuracy issues prevail. Thus, this study proposes a Bidirectional Long Short-Term Memory (Bi-LSTM) and Exponential Smoothing (ES) hybrid for crime forecasting. The proposed technique is evaluated using New York City crime data from 2010-2017. The proposed approach outperformed the state-of-the-art Seasonal Autoregressive Integrated Moving Average (SARIMA) model, with lower Mean Absolute Percentage Error (MAPE) (0.3738, 0.3891, 0.3433, 0.3964), Root Mean Square Error (RMSE) (13.146, 13.669, 13.104, 13.77), and Mean Absolute Error (MAE) (9.837, 10.896, 10.598, 10.721). Therefore, the proposed technique can help law enforcement agencies to prevent and control crime by forecasting crime patterns.
    Matched MeSH terms: Models, Statistical*
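The ES half of the hybrid in entry 8, and the MAPE metric it is scored with, are simple enough to sketch directly (illustrative only; the paper's model couples this with a Bi-LSTM and uses the NYC crime series, not the toy data below):

```python
import numpy as np

def exp_smooth(y, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha*y_t + (1-alpha)*s_{t-1}.
    s_{t-1} serves as the one-step-ahead forecast of y_t."""
    s = np.empty(len(y))
    s[0] = y[0]
    for t in range(1, len(y)):
        s[t] = alpha * y[t] + (1 - alpha) * s[t - 1]
    return s

def mape(actual, forecast):
    """Mean Absolute Percentage Error (as a fraction, not a percentage)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual))

y = np.array([120., 130., 125., 140., 150., 145., 160.])  # toy count series
fc = exp_smooth(y)[:-1]           # forecasts for y[1:]
err = mape(y[1:], fc)
```

In the hybrid setup, the smoothed series supplies the trend component while the Bi-LSTM captures the nonlinear residual structure.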
  9. Seyed Ehsan Saffari, Robiah Adnan
    Sains Malaysiana, 2012;41:1483-1487.
    A Poisson model is typically assumed for count data, but when the response variable contains many zeros, causing overdispersion, a negative binomial regression is suggested as the count regression instead of Poisson regression. In this paper, a zero-inflated negative binomial regression model for right-truncated count data was developed. The model considers a response variable and one or more explanatory variables. The estimation of the regression parameters using the maximum likelihood method is discussed, and the goodness-of-fit of the regression model is examined. We studied the effects of truncation on the parameter estimates, their standard errors, and the goodness-of-fit statistics using real data. The results showed a better fit using the truncated zero-inflated negative binomial regression model when the response variable has many zeros and is right-truncated.
    Matched MeSH terms: Models, Statistical
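The zero-inflated negative binomial of entry 9 mixes a point mass at zero (structural zeros, with probability π) with an ordinary NB count distribution. A minimal sketch of its probability mass function, assuming the standard NB(r, p) parameterization from scipy (the paper's truncation and regression structure are omitted):

```python
import numpy as np
from scipy.stats import nbinom

def zinb_pmf(k, pi, r, p):
    """Zero-inflated negative binomial pmf: with probability pi the count is
    a structural zero; otherwise it follows NB(r, p). The two sources of
    zeros are therefore pooled at k = 0."""
    k = np.asarray(k)
    pmf = (1 - pi) * nbinom.pmf(k, r, p)
    return np.where(k == 0, pi + pmf, pmf)

probs = zinb_pmf(np.arange(50), pi=0.3, r=2.0, p=0.4)
```

Right truncation at some bound c, as in the paper, would further divide this pmf by the ZINB CDF at c so that the support sums to one.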
  10. Hazarika PJ, Chakraborty S
    Sains Malaysiana, 2014;43:1801-1809.
    Hidden truncation (HT) and additive component (AC) are two well-known paradigms for generating skewed distributions from a known symmetric distribution. In the case of the normal distribution, both paradigms are known to lead to Azzalini's (1985) skew normal distribution. While HT directly gives Azzalini's (1985) skew normal distribution, the one generated by AC also leads to the same distribution under a reparameterization proposed by Arnold and Gomez (2009). However, no such reparameterization that makes the two paradigms yield exactly the same distribution has so far been suggested for the skewed distributions generated from symmetric logistic and Laplace distributions. In this article, an attempt is made to investigate, numerically as well as statistically, the closeness of the skew distributions generated by the HT and AC methods under the same reparameterization of Arnold and Gomez (2009) in the case of the logistic and Laplace distributions.
    Matched MeSH terms: Models, Statistical
  11. Ser G, Keskin S, Can Yilmaz M
    Sains Malaysiana, 2016;45:1755-1761.
    Multiple imputation is a widely used method in missing data analysis. The method consists of a three-stage process: imputation, analysis, and pooling. The number of imputations selected in the first (imputation) stage is important. Hence, this study aimed to examine the performance of the multiple imputation method at different numbers of imputations. A monotone missing data pattern was created by deleting approximately 24% of the observations from a continuous outcome variable with complete data. In the first stage of the multiple imputation method, monotone regression imputation was performed at different numbers of imputations (m=3, 5, 10 and 50). In the second stage, parameter estimates and their standard errors were obtained by applying a general linear model to each of the completed data sets. In the final stage, the results were pooled, and the effect of the number of imputations on the parameter estimates and their standard errors was evaluated on the basis of these results. In conclusion, the efficiency of the parameter estimates at m=50 was about 99%. Hence, at the given missing observation rate, the efficiency and performance of the multiple imputation method increased as the number of imputations increased.
    Matched MeSH terms: Models, Statistical
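The pooling stage of entry 11 follows Rubin's rules: the m per-imputation estimates are averaged, and the pooled variance combines the within-imputation variance W with the between-imputation variance B. The relative efficiency 1/(1 + λ/m), which the abstract reports as about 99% at m=50, falls out of the same quantities. A sketch with made-up estimates and variances:

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Pool m completed-data estimates by Rubin's rules and return the
    pooled estimate, its total variance, and the relative efficiency
    compared with an infinite number of imputations."""
    q = np.asarray(estimates, float)
    u = np.asarray(variances, float)
    m = len(q)
    qbar = q.mean()                      # pooled point estimate
    W = u.mean()                         # within-imputation variance
    B = q.var(ddof=1)                    # between-imputation variance
    T = W + (1 + 1 / m) * B              # total (pooled) variance
    lam = (1 + 1 / m) * B / T            # fraction of missing information
    re = 1 / (1 + lam / m)               # relative efficiency
    return qbar, T, re

qbar, T, re = rubin_pool([2.1, 2.3, 1.9, 2.2, 2.0],
                         [0.11, 0.09, 0.10, 0.12, 0.10])
```

Because the efficiency penalty shrinks like λ/m, modest m already captures most of the information, which is why the study sees diminishing returns between m=10 and m=50.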
  12. Kheirollahpour M, Shohaimi S
    ScientificWorldJournal, 2014;2014:512148.
    PMID: 25097878 DOI: 10.1155/2014/512148
    The main objective of this study is to identify and develop a comprehensive model which estimates and evaluates the overall relations among the factors that lead to weight gain in children by using structural equation modeling. The proposed models explore the connections among the socioeconomic status of the family, parental feeding practice, and physical activity. Six structural models were tested to identify the direct and indirect relationships between socioeconomic status, parental feeding practice, general level of physical activity, and weight status of children. Finally, a comprehensive model was devised to show how these factors relate to each other as well as to the body mass index (BMI) of the children simultaneously. Methodologically, confirmatory factor analysis (CFA) was applied to reveal the hidden (secondary) effect of socioeconomic factors on feeding practice and ultimately on the weight status of the children, and to determine the degree of model fit. The comprehensive structural model tested in this study suggested that there are significant direct and indirect relationships among the variables of interest. Moreover, the results suggest that parental feeding practice and physical activity are mediators in the structural model.
    Matched MeSH terms: Models, Statistical*
  13. Azamathulla HM, Zakaria NA
    Water Sci Technol, 2011;63(10):2225-30.
    PMID: 21977642
    The process involved in the local scour below pipelines is so complex that it makes it difficult to establish a general empirical model to provide accurate estimation for scour. This paper describes the use of artificial neural networks (ANN) to estimate the pipeline scour depth. The data sets of laboratory measurements were collected from published works and used to train the network or evolve the program. The developed networks were validated by using the observations that were not involved in training. The performance of ANN was found to be more effective when compared with the results of regression equations in predicting the scour depth around pipelines.
    Matched MeSH terms: Models, Statistical*
  14. Mohammed N, Palaniandy P, Shaik F, Mewada H, Balakrishnan D
    Chemosphere, 2023 Feb;314:137665.
    PMID: 36581118 DOI: 10.1016/j.chemosphere.2022.137665
    In this study, a batch reactor was employed to study the degradation of pollutants under natural sunlight using TiO2 as a photocatalyst. The effects of photocatalyst dosage, reaction time, and pH were investigated by evaluating the percentage removal efficiencies of total organic carbon (TOC), chemical oxygen demand (COD), biological oxygen demand (BOD), and biodegradability (BOD/COD). Design Expert Response Surface Methodology with Box-Behnken Design (RSM-BBD) and a MATLAB Artificial Neural Network - Adaptive Neuro-Fuzzy Inference System (ANN-ANFIS) were employed for the statistical modelling. The experimental maximum percentage removal efficiencies were TOC = 82.4%, COD = 85.9%, BOD = 30.9%, and biodegradability = 0.070. According to the RSM-BBD and ANFIS analyses, the maximum percentage removal efficiencies were TOC = 90.3, 82.4; COD = 85.4, 85.9; BOD = 28.9, 30.9% and biodegradability = 0.074, 0.080, respectively, at pH 7.5, reaction time 300 min, and photocatalyst dosage 4 g L-1. Both models predicted the experimental values well: the R2 values for the RSM-BBD (0.920) and ANFIS (0.990) models were close to 1, with the ANFIS model marginally better than RSM-BBD.
    Matched MeSH terms: Models, Statistical*
  15. Ng DC, Liew CH, Tan KK, Chin L, Ting GSS, Fadzilah NF, et al.
    BMC Infect Dis, 2023 Jun 12;23(1):398.
    PMID: 37308825 DOI: 10.1186/s12879-023-08357-y
    BACKGROUND: Children account for a significant proportion of COVID-19 hospitalizations, but data on the predictors of disease severity in children are limited. We aimed to identify risk factors associated with moderate/severe COVID-19 and develop a nomogram for predicting children with moderate/severe COVID-19.

    METHODS: We identified children ≤ 12 years old hospitalized for COVID-19 across five hospitals in Negeri Sembilan, Malaysia, from 1 January 2021 to 31 December 2021 from the state's pediatric COVID-19 case registration system. The primary outcome was the development of moderate/severe COVID-19 during hospitalization. Multivariate logistic regression was performed to identify independent risk factors for moderate/severe COVID-19. A nomogram was constructed to predict moderate/severe disease. The model performance was evaluated using the area under the curve (AUC), sensitivity, specificity, and accuracy.

    RESULTS: A total of 1,717 patients were included. After excluding the asymptomatic cases, 1,234 patients (1,023 mild cases and 211 moderate/severe cases) were used to develop the prediction model. Nine independent risk factors were identified: the presence of at least one comorbidity, shortness of breath, vomiting, diarrhea, rash, seizures, temperature on arrival, chest recessions, and abnormal breath sounds. The nomogram's sensitivity, specificity, accuracy, and AUC for predicting moderate/severe COVID-19 were 58.1%, 80.5%, 76.8%, and 0.86 (95% CI, 0.79-0.92), respectively.

    CONCLUSION: Our nomogram, which incorporated readily available clinical parameters, would be useful to facilitate individualized clinical decisions.

    Matched MeSH terms: Models, Statistical*
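The AUC reported for the nomogram in entry 15 has a direct rank interpretation: the probability that a randomly chosen moderate/severe case receives a higher predicted score than a randomly chosen mild case. A self-contained sketch of that computation on toy labels and scores (not the study's data):

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney formulation: count positive/negative pairs
    where the positive outscores the negative, with ties worth half."""
    y = np.asarray(y_true)
    s = np.asarray(scores, float)
    pos, neg = s[y == 1], s[y == 0]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

y = np.array([0, 0, 1, 0, 1, 1, 0, 1])          # toy outcome labels
s = np.array([.1, .4, .35, .8, .65, .9, .2, .3])  # toy predicted risks
auc = roc_auc(y, s)
```

Sensitivity and specificity, by contrast, depend on a chosen cut-off on the nomogram score, which is why they are reported separately from the threshold-free AUC.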
  16. Safaei MR, Mahian O, Garoosi F, Hooman K, Karimipour A, Kazi SN, et al.
    ScientificWorldJournal, 2014;2014:740578.
    PMID: 25379542 DOI: 10.1155/2014/740578
    This paper addresses erosion prediction in 3-D, 90° elbow for two-phase (solid and liquid) turbulent flow with low volume fraction of copper. For a range of particle sizes from 10 nm to 100 microns and particle volume fractions from 0.00 to 0.04, the simulations were performed for the velocity range of 5-20 m/s. The 3-D governing differential equations were discretized using finite volume method. The influences of size and concentration of micro- and nanoparticles, shear forces, and turbulence on erosion behavior of fluid flow were studied. The model predictions are compared with the earlier studies and a good agreement is found. The results indicate that the erosion rate is directly dependent on particles' size and volume fraction as well as flow velocity. It has been observed that the maximum pressure has direct relationship with the particle volume fraction and velocity but has a reverse relationship with the particle diameter. It also has been noted that there is a threshold velocity as well as a threshold particle size, beyond which significant erosion effects kick in. The average friction factor is independent of the particle size and volume fraction at a given fluid velocity but increases with the increase of inlet velocities.
    Matched MeSH terms: Models, Statistical*
  17. Falatoonitoosi E, Ahmed S, Sorooshian S
    ScientificWorldJournal, 2014;2014:103846.
    PMID: 24693224 DOI: 10.1155/2014/103846
    Decision-Making Trial and Evaluation Laboratory (DEMATEL) methodology has been proposed to solve complex and intertwined problems in many settings, such as capability development, complex group decision making, security problems, marketing approaches, global management, and control systems. DEMATEL identifies causal relationships by dividing the important issues into cause and effect groups, and makes it possible to visualize the causal relationships among subcriteria and systems in a causal diagram, which may represent a communication network or control relationships between individuals. Despite its ability to visualize cause and effect inside a network, the original DEMATEL cannot find the cause and effect groups between different networks. Therefore, the aim of this study is to propose an expanded DEMATEL that covers this deficiency, with new formulations to determine cause and effect factors between separate networks that have bidirectional direct impacts on each other. Finally, the feasibility of the new formulations is validated by case study in three numerical examples of green supply chain networks for an automotive company.
    Matched MeSH terms: Models, Statistical*
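The core DEMATEL computation that entry 17 extends is compact: normalize the direct-influence matrix and form the total-relation matrix T = D(I - D)^-1, whose row and column sums classify factors into cause and effect groups. A sketch on a small made-up influence matrix (the single-network original, not the paper's cross-network extension):

```python
import numpy as np

def dematel(direct):
    """Classic DEMATEL: normalize the direct-influence matrix, compute the
    total-relation matrix T = D (I - D)^-1, then derive prominence
    (r + c, overall importance) and relation (r - c, cause if positive,
    effect if negative) from T's row sums r and column sums c."""
    A = np.asarray(direct, float)
    D = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())
    T = D @ np.linalg.inv(np.eye(len(A)) - D)
    r, c = T.sum(axis=1), T.sum(axis=0)
    return T, r + c, r - c

A = np.array([[0, 3, 2],      # pairwise direct-influence scores (0-4 scale)
              [1, 0, 3],
              [2, 1, 0]])
T, prominence, relation = dematel(A)
```

The matrix inverse sums the geometric series I + D + D² + ..., i.e. direct plus all indirect influence paths, which is what lets the causal diagram show effects that never appear in the raw scores.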
  18. Biglari V, Alfan EB, Ahmad RB, Hajian N
    PLoS One, 2013;8(10):e73853.
    PMID: 24146741 DOI: 10.1371/journal.pone.0073853
    Previous research shows that buy (growth) companies conduct income-increasing earnings management in order to meet forecasts and generate positive forecast errors (FEs). This behavior, however, is not inherent in sell (non-growth) companies. Against this background, this research hypothesizes that since sell companies are pressured to avoid income-increasing earnings management, they are capable, and in fact more inclined, to pursue income-decreasing forecast management (FM) with the purpose of generating positive FEs. Using a sample of 6,553 firm-years of companies listed on the NYSE between 2005 and 2010, the study determines that sell companies conduct income-decreasing FM to generate positive FEs. However, the frequency of positive FEs of sell companies does not exceed that of buy companies. From the efficiency perspective, the study suggests that even though buy and sell companies have strong motivation to avoid negative FEs, they exploit different but efficient strategies to meet forecasts. Furthermore, the findings illuminate the complexities behind informative and opportunistic forecasts that fall under the efficiency versus opportunism theories in the literature.
    Matched MeSH terms: Models, Statistical*
  19. Tay BA
    PMID: 23767497
    We study the reduced dynamics of a pair of nondegenerate oscillators coupled collectively to a thermal bath. The model is related to the trilinear boson model where the idler mode is promoted to a field. Due to nonlinear coupling, the Markovian master equation for the pair of oscillators admits non-Gaussian equilibrium states, where the modes distribute according to the Bose-Einstein statistics. These states are metastable before the nonlinear coupling is taken over by linear coupling between the individual oscillators and the field. The Gibbs state for the individual modes lies in the subspace with infinite occupation quantum number. We present the time evolution of a few states to illustrate the behaviors of the system.
    Matched MeSH terms: Models, Statistical*
  20. Khalid R, Nawawi MK, Kawsar LA, Ghani NA, Kamil AA, Mustafa A
    PLoS One, 2013;8(4):e58402.
    PMID: 23560037 DOI: 10.1371/journal.pone.0058402
    M/G/C/C state-dependent queuing networks consider service rates as a function of the number of residing entities (e.g., pedestrians, vehicles, and products). However, modeling such dynamic rates is not supported in modern Discrete Event Simulation (DES) software. We designed an approach to address this limitation and used it to construct an M/G/C/C state-dependent queuing model in Arena software. Using the model, we evaluated and analyzed the impact of various arrival rates on the throughput, the blocking probability, the expected service time, and the expected number of entities in a complex network topology. Results indicated that for each network there is a range of arrival rates in which the simulation results fluctuate drastically across replications, causing discrepancies between the simulation and the analytical results. Detailed results showing how closely the simulation results tally with the analytical results, in both tabular and graphical forms, together with scientific justifications, are documented and discussed.
    Matched MeSH terms: Models, Statistical*
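The M/G/C/C model of entry 20 is a finite-capacity loss system: arrivals finding all C spaces occupied are blocked. In the state-independent special case, the blocking probability reduces to the classical Erlang-B formula, which has a numerically stable recursion; this is a simplification of the paper's state-dependent rates, shown only to make the blocking-probability quantity concrete:

```python
def erlang_b(c, a):
    """Erlang-B blocking probability for a loss system with c spaces and
    offered load a = lambda/mu, via the stable recursion
    B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

p_block = erlang_b(c=10, a=8.0)   # 10 spaces, offered load 8
```

In the state-dependent pedestrian setting, the service rate falls as occupancy rises, so the true blocking probability exceeds this Erlang-B baseline; the recursion still conveys why blocking grows sharply once the offered load approaches capacity.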