Displaying publications 1 - 20 of 135 in total

  1. Beenish H, Javid T, Fahad M, Siddiqui AA, Ahmed G, Syed HJ
    Sensors (Basel), 2023 Jan 09;23(2).
    PMID: 36679565 DOI: 10.3390/s23020768
    An intelligent transportation system (ITS) aims to improve traffic efficiency by integrating innovative sensing, control, and communications technologies. The industrial Internet of things (IIoT) and Industrial Revolution 4.0 recently merged to produce the industrial Internet of things-intelligent transportation system (IIoT-ITS). IIoT sensing technologies play a significant role in acquiring raw data. The application continuously performs the complex task of managing traffic flows effectively based on several parameters, including the number of vehicles in the system, their location, and time. Traffic density estimation (TDE) is another important derived parameter used to keep track of the dynamic state of traffic volume. The expanding number of wirelessly connected vehicles provides new potential to predict traffic density more accurately, and in real time, than previously used methodologies. We explore the topic of assessing traffic density using only a few simple metrics, such as the number of surrounding vehicles, and by disseminating beacons to roadside units and vice versa. This research paper investigates TDE techniques and presents a novel Markov model-based TDE technique for ITS. Finally, an OMNET++-based approach is presented, implementing a significant modification of a traffic model combined with mathematical modeling of the Markov model. It is intended for the study of real-world traffic traces, the identification of model parameters, and the development of simulated traffic.
    Matched MeSH terms: Benchmarking*
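The Markov-model TDE idea above can be sketched as a discrete-state chain whose transition probabilities are estimated from observed density levels. This is a hypothetical illustration, not the paper's calibrated model; the three states and the beacon-derived observation sequence are assumptions:

```python
# Hypothetical sketch: a discrete-state Markov chain for traffic density
# estimation (TDE). The states and observations are illustrative, not the
# paper's calibrated model.
from collections import defaultdict

STATES = ["low", "medium", "high"]

def estimate_transitions(observations):
    """Estimate a row-stochastic transition matrix from a sequence of
    observed density states (e.g. derived from beacon counts)."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, cur in zip(observations, observations[1:]):
        counts[prev][cur] += 1
    matrix = {}
    for s in STATES:
        total = sum(counts[s].values())
        matrix[s] = {t: (counts[s][t] / total if total else 1 / len(STATES))
                     for t in STATES}
    return matrix

def predict_next(matrix, current):
    """Most likely next density state under the estimated chain."""
    return max(matrix[current], key=matrix[current].get)
```

Each row of the estimated matrix sums to one, and an unobserved state falls back to a uniform row rather than an undefined one.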
  2. Alnoor A, Chew X, Khaw KW, Muhsen YR, Sadaa AM
    Environ Sci Pollut Res Int, 2024 Jan;31(4):5762-5783.
    PMID: 38133762 DOI: 10.1007/s11356-023-31645-8
    Greenhouse gas emissions and global warming are issues of growing concern. This study sought to underline the causal relationships between engagement modes with green technology, the environmental, social, and governance (ESG) ratio, and the circular economy. Our investigation also captured benchmarking of energy companies' circular economy behaviors. A hybrid-stage partial least squares structural equation modeling (PLS-SEM) and multi-criteria decision-making (MCDM) analysis was adopted. This study collected 713 questionnaires from heads of departments and managers of energy companies. The findings of this study claimed that engagement modes with green technology affect the circular economy and sustainability. The findings revealed that ESG ratings have a mediating role in the nexus between engagement modes with green technology and the circular economy. The MCDM application identified the best- and worst-performing energy companies in terms of circular economy behaviors. This study is exceptional because it is among the first to address the issue of greenhouse gas emissions by providing decisive evidence about the level of circular economy behaviors in energy companies.
    Matched MeSH terms: Benchmarking*
  3. Hou Chin J, Ratnavelu K
    Sci Rep, 2017 04 04;7:45836.
    PMID: 28374836 DOI: 10.1038/srep45836
    Community structure is an important feature of a complex network, where detection of the community structure can shed some light on the properties of such a complex network. Amongst the proposed community detection methods, the label propagation algorithm (LPA) emerges as an effective detection method due to its time efficiency. Despite this advantage in computational time, the performance of LPA is affected by randomness in the algorithm. A modified LPA, called CLPA-GNR, was proposed recently and it succeeded in handling the randomness issues in the LPA. However, it did not remove the tendency for trivial detection in networks with a weak community structure. In this paper, an improved CLPA-GNR is therefore proposed. In the new algorithm, the unassigned and assigned nodes are updated synchronously while the assigned nodes are updated asynchronously. A similarity score, based on the Sørensen-Dice index, is implemented to detect the initial communities and for breaking ties during the propagation process. Constraints are utilised during the label propagation and community merging processes. The performance of the proposed algorithm is evaluated on various benchmark and real-world networks. We find that it is able to avoid trivial detection while showing substantial improvement in the quality of detection.
    Matched MeSH terms: Benchmarking
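The Sørensen-Dice tie-breaking idea can be illustrated with a minimal label propagation sketch. This is a simplified stand-in for CLPA-GNR (it omits the paper's synchronous/asynchronous update split and community-merging constraints); the example graph and update schedule are assumptions:

```python
# A minimal sketch of label propagation with Sørensen-Dice tie-breaking.
# Simplified relative to CLPA-GNR: no synchronous/asynchronous phase split
# and no merging constraints.
from collections import Counter

def dice(adj, u, v):
    """Sørensen-Dice similarity between the neighbour sets of u and v."""
    nu, nv = adj[u], adj[v]
    denom = len(nu) + len(nv)
    return 2 * len(nu & nv) / denom if denom else 0.0

def propagate(adj, max_iters=100):
    labels = {n: n for n in adj}        # every node starts in its own community
    for _ in range(max_iters):
        changed = False
        for n in adj:
            if not adj[n]:
                continue
            freq = Counter(labels[m] for m in adj[n])
            best = max(freq.values())
            ties = [l for l, c in freq.items() if c == best]
            if len(ties) > 1:
                # break ties by total Dice similarity to the neighbours
                # carrying each candidate label
                ties.sort(key=lambda l: -sum(dice(adj, n, m)
                                             for m in adj[n] if labels[m] == l))
            if labels[n] != ties[0]:
                labels[n] = ties[0]
                changed = True
        if not changed:
            break
    return labels
```

On a graph of two triangles joined by one edge, the sketch recovers the two triangles as separate communities.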
  4. Wu Diyi, Zulaiha Ali Othman, Suhaila Zainudin, Ayman Srour
    MyJurnal
    The water flow-like algorithm (WFA) is a relatively new metaheuristic algorithm which has shown good solutions for the Travelling Salesman Problem (TSP), comparable to state-of-the-art results. The basic WFA for TSP uses a 2-opt searching method to decide a water flow splitting decision. Previous algorithms, such as the Ant Colony System for the TSP, have shown that using k-opt (k>2) improves the solution but increases complexity exponentially. Therefore, this paper presents the performance of the WFA-TSP using 3-opt and 4-opt, respectively, and compares them with the basic WFA-TSP using 2-opt and with state-of-the-art algorithms. The algorithms are evaluated using 16 benchmark TSP datasets. The experimental results show that the proposed WFA-TSP-4opt outperforms the others in solution quality, owing to its greater exploration capacity and slower convergence.
    Matched MeSH terms: Benchmarking
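The 2-opt move used by the basic WFA-TSP can be sketched as a standalone local search. The instance below is illustrative, and the sketch deliberately omits the water-flow splitting logic itself:

```python
# A minimal 2-opt local search sketch for the TSP, the move used inside the
# basic WFA-TSP. The point set and starting tour are illustrative.
import math

def tour_length(tour, pts):
    """Total length of a closed tour over 2D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse a segment whenever doing so shortens the tour."""
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j][::-1] + best[j:]
                if tour_length(cand, pts) < tour_length(best, pts) - 1e-12:
                    best, improved = cand, True
    return best
```

A 3-opt or 4-opt variant enlarges the move neighbourhood (removing three or four edges per move), which is exactly the trade-off between solution quality and complexity the paper studies.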
  5. Wang S, Liu Q, Liu Y, Jia H, Abualigah L, Zheng R, et al.
    Comput Intell Neurosci, 2021;2021:6379469.
    PMID: 34531910 DOI: 10.1155/2021/6379469
    Based on the Salp Swarm Algorithm (SSA) and the Slime Mould Algorithm (SMA), a novel hybrid optimization algorithm, named the Hybrid Slime Mould Salp Swarm Algorithm (HSMSSA), is proposed to solve constrained engineering problems. SSA can obtain good results in solving some optimization problems. However, it easily falls into local minima and suffers from low population density. SMA specializes in global exploration and good robustness, but its convergence rate is too slow to find satisfactory solutions efficiently. Thus, in this paper, considering the characteristics and advantages of both of the above optimization algorithms, SMA is integrated into the leader position updating equations of SSA, which can share helpful information so that the proposed algorithm can utilize these two algorithms' advantages to enhance global optimization performance. Furthermore, Levy flight is utilized to enhance the exploration ability. It is worth noting that a novel strategy called mutation opposition-based learning is proposed to enhance the performance of the hybrid optimization algorithm on premature convergence avoidance, balance between exploration and exploitation phases, and finding a satisfactory global optimum. To evaluate the efficiency of the proposed algorithm, HSMSSA is applied to 23 different benchmark functions of the unimodal and multimodal types. Additionally, five classical constrained engineering problems are utilized to evaluate the proposed technique's practicable abilities. The simulation results show that the HSMSSA method is more competitive and presents more engineering effectiveness for real-world constrained problems than SMA, SSA, and other comparative algorithms. In the end, we also provide some potential areas for future studies such as feature selection and multilevel threshold image segmentation.
    Matched MeSH terms: Benchmarking*
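The Levy-flight exploration step mentioned above is commonly implemented with Mantegna's algorithm; a sketch follows, assuming beta = 1.5 and a small step scale, neither of which is taken from the paper:

```python
# A sketch of a Levy-flight step via Mantegna's algorithm, the kind of
# heavy-tailed perturbation used to widen exploration. beta and the scale
# are common illustrative choices, not the paper's exact settings.
import math
import random

def levy_step(beta=1.5):
    """Draw one heavy-tailed step: u / |v|^(1/beta) with u ~ N(0, sigma^2)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def levy_perturb(position, scale=0.01):
    """Apply an independent Levy step to each coordinate of a candidate."""
    return [x + scale * levy_step() for x in position]
```

Most steps are small, but occasional very large jumps let the search escape the neighbourhood of a local minimum.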
  6. Balla A, Habaebi MH, Elsheikh EAA, Islam MR, Suliman FM
    Sensors (Basel), 2023 Jan 09;23(2).
    PMID: 36679553 DOI: 10.3390/s23020758
    Integrating IoT devices in SCADA systems has provided efficient and improved data collection and transmission technologies. This enhancement comes with significant security challenges, exposing traditionally isolated systems to the public internet. Effective and highly reliable security devices, such as intrusion detection systems (IDSs) and intrusion prevention systems (IPSs), are critical. Countless studies have used deep learning algorithms to design an efficient IDS; however, the fundamental issue of imbalanced datasets was not fully addressed. In our research, we examined the impact of data imbalance on developing an effective SCADA-based IDS. To investigate the impact of various data balancing techniques, including random sampling, one-sided selection (OSS), near-miss, SMOTE, and ADASYN, we chose two imbalanced datasets: the Morris power dataset and the CICIDS2017 dataset. For binary classification, convolutional neural networks were coupled with long short-term memory (CNN-LSTM). The system's effectiveness was determined from the confusion matrix, using evaluation metrics such as accuracy, precision, detection rate, and F1-score. Four experiments on the two datasets demonstrate the impact of the data imbalance. This research aims to help security researchers understand imbalanced datasets and their impact on DL SCADA-IDS.
    Matched MeSH terms: Benchmarking*
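The confusion-matrix metrics named in the abstract (accuracy, precision, detection rate/recall, F1-score) can be computed directly; a minimal sketch, with illustrative counts in the test values:

```python
# Evaluation metrics derived from a binary confusion matrix:
# tp = true positives, fp = false positives, fn = false negatives,
# tn = true negatives.
def binary_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0   # a.k.a. detection rate
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

On an imbalanced dataset, accuracy alone is misleading (predicting the majority class scores well), which is why recall and F1 matter for an IDS.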
  7. Fan PY, Chun KP, Tan ML, Mah DN, Mijic A, Strickert G, et al.
    PLoS One, 2023;18(9):e0289780.
    PMID: 37682889 DOI: 10.1371/journal.pone.0289780
    The importance of easy wayfinding in complex urban settings has been recognized in spatial planning. Empirical measurement and explicit representation of wayfinding, however, have been limited in deciding spatial configurations. Our study proposed and tested an approach to improving wayfinding by incorporating spatial analysis of urban forms in the Guangdong-Hong Kong-Macau Great Bay Area in China. Wayfinding was measured by an indicator of intelligibility using spatial design network analysis. Urban spatial configurations were quantified using landscape metrics to describe the spatial layouts of local climate zones (LCZs) as standardized urban forms. The statistical analysis demonstrated the significant associations between urban spatial configurations and wayfinding. These findings suggested, to improve wayfinding, 1) dispersing LCZ 1 (compact high-rise) and LCZ 2 (compact mid-rise) and 2) agglomerating LCZ 3 (compact low-rise), LCZ 5 (open mid-rise), LCZ 6 (open low-rise), and LCZ 9 (sparsely built). To our knowledge, this study is the first to incorporate the LCZ classification system into the wayfinding field, clearly providing empirically-supported solutions for dispersing and agglomerating spatial configurations. Our findings also provide insights for human-centered spatial planning by spatial co-development at local, urban, and regional levels.
    Matched MeSH terms: Benchmarking*
  8. Khan HHA, Ahmad N, Yusof NM, Chowdhury MAM
    Environ Sci Pollut Res Int, 2024 Feb;31(6):9784-9794.
    PMID: 38194178 DOI: 10.1007/s11356-023-31809-6
    This study critically examines the dynamic interplay between green finance and environmental sustainability using a systematic review and bibliometric analysis. The analysis is centered on 507 scholarly articles published between 2013 and 2023 in the Scopus database and leverages Microsoft Excel, Harzing Publish or Perish, and VOSviewer to identify publication trends, key contributors, research impact, and emergent themes in this rapidly evolving field. The findings reveal that research on green finance and environmental sustainability has increased exponentially over the past decade, with China and institutions in Asia emerging as prominent contributors compared to other regions. This study also identified the Environmental Science and Pollution Research journal as the most active source title, demonstrating its commitment to publishing current findings on the topic. Through keyword analysis, several research avenues have been proposed to guide future research on enhancing the strategic role of green finance in promoting environmental sustainability. These avenues include broadening the geographical scope of research, exploring the synergies between green finance and emerging fintech innovations, developing robust metrics to quantify the socioeconomic impacts of green finance, establishing a risk and resilience framework to protect green finance against uncertainties, and creating a Green Finance Performance Index to evaluate the dual returns of environmental and financial performance.
    Matched MeSH terms: Benchmarking*
  9. Bahashwan AA, Anbar M, Manickam S, Issa G, Aladaileh MA, Alabsi BA, et al.
    PLoS One, 2024;19(2):e0297548.
    PMID: 38330004 DOI: 10.1371/journal.pone.0297548
    Software Defined Networking (SDN) has alleviated traditional network limitations but faces a significant challenge due to the risk of Distributed Denial of Service (DDoS) attacks against an SDN controller, with current detection methods lacking evaluation on realistic SDN datasets and standard DDoS attacks (i.e., high-rate DDoS attacks). Therefore, a realistic dataset called HLD-DDoSDN is introduced, encompassing prevalent DDoS attacks specifically aimed at an SDN controller, such as Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), and User Datagram Protocol (UDP) attacks. This SDN dataset also incorporates diverse levels of traffic fluctuations, representing different traffic variation rates (i.e., high and low rates) in DDoS attacks. It is qualitatively compared to existing SDN datasets and quantitatively evaluated across all eight scenarios to ensure its superiority. Furthermore, it fulfils the requirements of a benchmark dataset in terms of size and variety of attacks and scenarios, with significant features that contribute strongly to detecting realistic SDN attacks. The features of HLD-DDoSDN are evaluated using a Deep Multilayer Perceptron (D-MLP) based detection approach. Experimental findings indicate that the employed features exhibit high performance in the detection accuracy, recall, and precision of detecting high- and low-rate DDoS flooding attacks.
    Matched MeSH terms: Benchmarking*
  10. Al-Dabbagh MM, Salim N, Himmat M, Ahmed A, Saeed F
    Molecules, 2015;20(10):18107-27.
    PMID: 26445039 DOI: 10.3390/molecules201018107
    One of the most widely used techniques for ligand-based virtual screening is similarity searching. This study adopted concepts from quantum mechanics to present a state-of-the-art similarity method for molecules inspired by quantum theory. The representation of molecular compounds in a mathematical quantum space plays a vital role in the development of the quantum-based similarity approach. One of the key concepts of quantum theory is the use of complex numbers. Hence, this study proposed three techniques to embed and re-represent molecular compounds in complex-number format. The quantum-based similarity method developed in this study depends on the complex pure Hilbert space of molecules and is called Standard Quantum-Based (SQB). The recall of retrieved active molecules was measured at the top 1% and top 5%, and a significance test was used to evaluate the proposed methods. The MDL Drug Data Report (MDDR), Maximum Unbiased Validation (MUV) and Directory of Useful Decoys (DUD) datasets were used for the experiments and were represented by 2D fingerprints. Simulated virtual screening experiments show that the effectiveness of the SQB method was significantly increased, owing to the representational power of molecular compounds in complex-number form, compared to the Tanimoto benchmark similarity measure.
    Matched MeSH terms: Benchmarking
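The Tanimoto benchmark measure against which SQB is compared has a simple closed form on binary fingerprints; a sketch, representing each fingerprint as the set of its "on" bit positions:

```python
# Tanimoto (Jaccard) coefficient on binary 2D fingerprints, the benchmark
# similarity measure in the abstract. Fingerprints are represented as sets
# of on-bit positions.
def tanimoto(fp_a, fp_b):
    """|A ∩ B| / |A ∪ B| for fingerprints given as sets of on-bits."""
    if not fp_a and not fp_b:
        return 1.0          # convention: two empty fingerprints are identical
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)
```

A similarity search then ranks database compounds by this score against the query and inspects the top 1% or 5% for known actives, as in the evaluation described above.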
  11. Niamul Islam N, Hannan MA, Mohamed A, Shareef H
    PLoS One, 2016;11(1):e0146277.
    PMID: 26745265 DOI: 10.1371/journal.pone.0146277
    Power system oscillation is a serious threat to the stability of multimachine power systems. The coordinated control of power system stabilizers (PSS) and thyristor-controlled series compensation (TCSC) damping controllers is a commonly used technique to provide the required damping over different modes of growing oscillations. However, their coordinated design is a complex multimodal optimization problem that is very hard to solve using traditional tuning techniques. In addition, several limitations of traditionally used techniques prevent the optimum design of coordinated controllers. In this paper, an alternate technique for robust damping over oscillation is presented using the backtracking search algorithm (BSA). A 5-area 16-machine benchmark power system is considered to evaluate the design efficiency. The complete design process is conducted in a linear time-invariant (LTI) model of a power system. It includes the design formulation into a multi-objective function from the system eigenvalues. Later on, nonlinear time-domain simulations are used to compare the damping performances for different local and inter-area modes of power system oscillations. The performance of the BSA technique is compared against that of the popular particle swarm optimization (PSO) for coordinated design efficiency. Damping performances using different design techniques are compared in terms of settling time and overshoot of oscillations. The results obtained verify that the BSA-based design improves the system stability significantly. The stability of the multimachine power system is improved by up to 74.47% and 79.93% for an inter-area mode and a local mode of oscillation, respectively. Thus, the proposed technique for coordinated design has great potential to improve power system stability and to maintain its secure operation.
    Matched MeSH terms: Benchmarking
  12. Adam MS, Por LY, Hussain MR, Khan N, Ang TF, Anisi MH, et al.
    Sensors (Basel), 2019 Aug 29;19(17).
    PMID: 31470520 DOI: 10.3390/s19173732
    Many receiver-based Preamble Sampling Medium Access Control (PS-MAC) protocols have been proposed to provide better performance for variable traffic in a wireless sensor network (WSN). However, most of these protocols cannot prevent the occurrence of incorrect traffic convergence, which causes the receiver node to wake up more frequently than the transmitter node. In this research, a new protocol is proposed to prevent the problem mentioned above. The proposed mechanism has four components: an initial control frame message, a traffic estimation function, a control frame message, and an adaptive function. The initial control frame message is used to initiate the message transmission by the receiver node. The traffic estimation function is proposed to reduce the wake-up frequency of the receiver node by using the proposed traffic status register (TSR), idle listening times (ILTn, ILTk), and "number of wake-ups without receiving a beacon message" (NWwbm). The control frame message aims to supply the essential information the receiver node needs to compute the next wake-up-interval (WUI) time for the transmitter node using the proposed adaptive function. The proposed adaptive function is used by the receiver node to calculate the next WUI time of each of the transmitter nodes. Several simulations are conducted against the benchmark protocols. The outcome of the simulations indicates that the proposed mechanism can prevent the incorrect traffic convergence problem that causes frequent wake-up of the receiver node compared to the transmitter node. Moreover, the simulation results also indicate that the proposed mechanism can reduce energy consumption, lower latency, improve throughput, and achieve a higher packet delivery ratio compared to other related works.
    Matched MeSH terms: Benchmarking
  13. Assunta Malar Patrick Vincent, Hassilah Salleh
    MyJurnal
    A wide range of studies have been conducted on deep learning to forecast time series data. However, very few studies have discussed the optimal number of hidden layers and the number of nodes in each hidden layer of the architecture. It is crucial to study these choices, as they control the performance of the architecture. Apart from that, depending on the activation function, different computations take place between the hidden layers and the output layer. Therefore, in this study, a multilayer perceptron (MLP) architecture is developed using Python to forecast time series data. The developed architecture is applied to the Apple Inc. stock price due to its volatile characteristics. Using historical prices, the accuracy of the forecast is measured across different activation functions, numbers of hidden layers, and sizes of data. The Keras deep learning library for Python is used to develop the MLP architecture. The developed model is then applied to different cases, namely different sizes of data, different activation functions, different numbers of hidden layers of up to nine layers, and different numbers of nodes in each hidden layer. The metrics mean squared error (MSE), mean absolute error (MAE) and root-mean-square error (RMSE) are employed to test the accuracy of the forecast. It is found that the architecture with the rectified linear unit (ReLU) activation achieved the highest accuracy in every case, regardless of the number of hidden layers. To conclude, the optimal number of hidden layers differs in every case, as there are other influencing factors.
    Matched MeSH terms: Benchmarking
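The forecast-accuracy metrics used in the study (MSE, MAE, RMSE) can be sketched in a few lines; the sample series in the test is illustrative:

```python
# Forecast-error metrics from the abstract: MSE, MAE, and RMSE over paired
# actual and predicted values.
import math

def forecast_errors(actual, predicted):
    """Compute MSE, MAE and RMSE for equal-length series."""
    diffs = [a - p for a, p in zip(actual, predicted)]
    mse = sum(d * d for d in diffs) / len(diffs)
    mae = sum(abs(d) for d in diffs) / len(diffs)
    return {"mse": mse, "mae": mae, "rmse": math.sqrt(mse)}
```

RMSE is just the square root of MSE, so it penalizes large errors the same way while staying in the units of the original price series.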
  14. Al-Bashiri H, Abdulgabber MA, Romli A, Kahtan H
    PLoS One, 2018;13(10):e0204434.
    PMID: 30286123 DOI: 10.1371/journal.pone.0204434
    This paper describes an approach for improving the accuracy of memory-based collaborative filtering, based on the technique for order of preference by similarity to ideal solution (TOPSIS) method. Recommender systems are used to filter the huge amount of data available online based on user-defined preferences. Collaborative filtering (CF) is a commonly used recommendation approach that generates recommendations based on correlations among user preferences. Although several enhancements have increased the accuracy of memory-based CF through the development of improved similarity measures for finding successful neighbors, there has been less investigation into prediction score methods, in which rating/preference scores are assigned to items that have not yet been selected by a user. A TOPSIS solution for evaluating multiple alternatives based on more than one criterion is proposed as an alternative to prediction score methods for evaluating and ranking items based on the results from similar users. The recommendation accuracy of the proposed TOPSIS technique is evaluated by applying it to various common CF baseline methods, which are then used to analyze the MovieLens 100K and 1M benchmark datasets. The results show that CF based on the TOPSIS method is more accurate than baseline CF methods across a number of common evaluation metrics.
    Matched MeSH terms: Benchmarking
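The TOPSIS ranking step can be sketched compactly. This version assumes equal weights and treats every criterion as a benefit criterion, which is a simplification of the paper's setup:

```python
# A minimal TOPSIS sketch: rank alternatives by closeness to the ideal
# solution. Assumes equal weights and benefit-type criteria only.
import math

def topsis(matrix):
    """Rows are alternatives, columns are criteria; returns closeness
    scores in [0, 1], higher is better."""
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(x * x for x in col)) or 1.0 for col in cols]
    norm = [[x / n for x, n in zip(row, norms)] for row in matrix]
    ideal = [max(col) for col in zip(*norm)]   # positive-ideal solution
    anti = [min(col) for col in zip(*norm)]    # negative-ideal solution
    scores = []
    for row in norm:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg) if d_pos + d_neg else 0.0)
    return scores
```

In the CF setting described above, candidate items would be the alternatives and the similar users' ratings the criteria, so items are ranked rather than assigned a single predicted score.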
  15. Gharaei N, Abu Bakar K, Mohd Hashim SZ, Hosseingholi Pourasl A, Siraj M, Darwish T
    Sensors (Basel), 2017 Aug 11;17(8).
    PMID: 28800121 DOI: 10.3390/s17081858
    Network lifetime and energy efficiency are crucial performance metrics used to evaluate wireless sensor networks (WSNs). Decreasing and balancing the energy consumption of nodes can be employed to increase network lifetime. In cluster-based WSNs, one objective of applying clustering is to decrease the energy consumption of the network. In fact, the clustering technique will be considered effective if the energy consumed by sensor nodes decreases after applying clustering, however, this aim will not be achieved if the cluster size is not properly chosen. Therefore, in this paper, the energy consumption of nodes, before clustering, is considered to determine the optimal cluster size. A two-stage Genetic Algorithm (GA) is employed to determine the optimal interval of cluster size and derive the exact value from the interval. Furthermore, the energy hole is an inherent problem which leads to a remarkable decrease in the network's lifespan. This problem stems from the asynchronous energy depletion of nodes located in different layers of the network. For this reason, we propose Circular Motion of Mobile-Sink with Varied Velocity Algorithm (CM2SV2) to balance the energy consumption ratio of cluster heads (CH). According to the results, these strategies could largely increase the network's lifetime by decreasing the energy consumption of sensors and balancing the energy consumption among CHs.
    Matched MeSH terms: Benchmarking
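The idea of searching for an optimal cluster size with a GA can be sketched as below. The cost function in the test is an illustrative stand-in, not the paper's WSN energy model, and this single-stage GA simplifies the paper's two-stage design:

```python
# A toy single-stage GA sketch for choosing an integer cluster size.
# The cost function passed in is assumed to model energy consumption;
# here it is purely illustrative.
import random

def evolve(cost, low, high, pop_size=20, gens=50):
    """Minimize cost(k) over integers k in [low, high]."""
    pop = [random.randint(low, high) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                   # arithmetic crossover
            if random.random() < 0.2:              # small integer mutation
                child = min(high, max(low, child + random.randint(-2, 2)))
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)
```

The paper's two-stage variant would first narrow [low, high] to a promising interval and then search within it; the sketch collapses both stages into one loop.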
  16. Yew Y, Arcos González P, Castro Delgado R
    Prehosp Disaster Med, 2020 Feb;35(1):76-82.
    PMID: 31928556 DOI: 10.1017/S1049023X19005247
    INTRODUCTION: The Richter Scale measures the magnitude of the seismic activity for an earthquake; however, it does not quantify the humanitarian need at the point of impact. This poses a challenge for humanitarian stakeholders in decision and policy making, especially in risk reduction, response, recovery, and reconstruction. The new disaster metrics tool titled "The YEW Disaster Severity Index" (DSI) was developed and presented at the 2017 World Congress of Disaster and Emergency Medicine, May 2017, Toronto, Canada. It uses a median score of three for vulnerability and exposure indicators, a median score percentage of 100%, and medium YEW DSI scoring of four to five as baseline, indicating the ability to cope within local capacity. Therefore, scoring more than baseline coping capacity indicates that external assistance is needed. This special real-time report was presented at the 2nd National Pre-Hospital Care Conference and Championship, October 2018, Malaysia.

    REPORT: The aim of this analysis is to present the real-time humanitarian impact and response to the 2018 earthquake and tsunami at Donggala and Palu, Sulawesi in Indonesia using the new disaster metrics YEW DSI. Based on the earthquake (measuring 7.7 on the Richter Scale) and tsunami at Donggala, the humanitarian impact calculated on September 29, 2018 scored 7.4 High in the YEW DSI, with 11 of the total 17 indicators scoring more than the baseline coping capacity. The same YEW DSI score of 7.4 was obtained for the earthquake and tsunami at Palu, with 13 of the total 17 indicators scoring more than the baseline ability to cope within local capacity. Impact analysis reports were sent to relevant authorities on September 30, 2018.

    DISCUSSION & CONCLUSION: A State of Emergency was declared for a national response, which indicated an inability to cope within the local capacity, shown by the YEW DSI. The strong correlation between the earthquake magnitude, intensities, and the humanitarian impact at Donggala and Palu reported could be added into the science of knowledge in prehospital care and disaster medicine research and practice. As a conclusion, the real-time disaster response was found to be almost an exact fit with the YEW DSI indicators, demonstrating the inability to cope within the local capacity.

    Matched MeSH terms: Benchmarking*
  17. El-Hassan O, Sharif A, Al Redha M, Blair I
    PMID: 29295053
    In the United Arab Emirates (UAE), health services have developed greatly in the past 40 years. To ensure they continue to meet the needs of the population, innovation and change are required, including investment in a strong e-Health infrastructure with a single transferrable electronic patient record. In this paper, using the Emirate of Dubai as a case study, we report on the Middle East Electronic Medical Record Adoption Model (EMRAM). Between 2011 and 2016, the number of participating hospitals increased from 23 to 33. Currently, while 20/33 hospitals are at Stage 2 or less, 10/33 have reached Stage 5. In addition, Dubai's median EMRAM score in 2016 (2.5) was higher than the scores reported from Australia (2.2), New Zealand (2.3), Malaysia (0.06), the Philippines (0.06) and Thailand (0.5). EMRAM has allowed the tracking of the progress being made by healthcare facilities in Dubai towards upgrading their information technology infrastructure and introducing electronic medical records.
    Matched MeSH terms: Benchmarking*
  18. Alsalem MA, Zaidan AA, Zaidan BB, Hashim M, Albahri OS, Albahri AS, et al.
    J Med Syst, 2018 Sep 19;42(11):204.
    PMID: 30232632 DOI: 10.1007/s10916-018-1064-9
    This study aims to systematically review prior research on the evaluation and benchmarking of automated acute leukaemia classification tasks. The review depends on three reliable search engines: ScienceDirect, Web of Science and IEEE Xplore. A research taxonomy developed for the review considers a wide perspective for automated detection and classification of acute leukaemia research and reflects the usage trends in the evaluation criteria in this field. The developed taxonomy consists of three main research directions in this domain. The taxonomy involves two phases. The first phase includes all three research directions. The second one demonstrates all the criteria used for evaluating acute leukaemia classification. The final set of studies includes 83 investigations, most of which focused on enhancing the accuracy and performance of detection and classification through proposed methods or systems. Few efforts were made to undertake the evaluation issues. According to the final set of articles, three groups of articles represented the main research directions in this domain: 56 articles highlighted the proposed methods, 22 articles involved proposals for system development and 5 papers centred on evaluation and comparison. The other taxonomy side included 16 main and sub-evaluation and benchmarking criteria. This review highlights three serious issues in the evaluation and benchmarking of multiclass classification of acute leukaemia, namely, conflicting criteria, evaluation criteria and criteria importance. It also determines the weakness of benchmarking tools. To solve these issues, multicriteria decision-making (MCDM) analysis techniques were proposed as effective recommended solutions in the methodological aspect. This methodological aspect involves a proposed decision support system based on MCDM for evaluation and benchmarking to select suitable multiclass classification models for acute leukaemia. 
The proposed support system comprises three sequential phases. Phase One presents the identification procedure and the process for establishing a decision matrix based on a crossover of evaluation criteria and acute leukaemia multiclass classification models. Phase Two describes the development of the decision matrix for the selection of acute leukaemia classification models based on the integrated best-worst method (BWM) and VIKOR. Phase Three entails the validation of the proposed system.
    Matched MeSH terms: Benchmarking*
  19. Shamsan Saleh AM, Ali BM, Rasid MF, Ismail A
    Sensors (Basel), 2012;12(8):11307-33.
    PMID: 23112658 DOI: 10.3390/s120811307
    Planning of energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes' energy. The routing protocol should be able to provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. This method is based on the Ant Colony Optimization (ACO) metaheuristic, which is adopted to enhance the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, hop count and average energy of both route and network. This method also distributes the traffic load of sensor nodes throughout the WSN, leading to reduced energy usage, extended network lifetime and reduced packet loss. Simulation results show that our scheme performs much better than the Energy Efficient Ant-Based Routing (EEABR) in terms of energy consumption, balancing and efficiency.
    Matched MeSH terms: Benchmarking
  20. Darzi S, Islam MT, Tiong SK, Kibria S, Singh M
    PLoS One, 2015;10(11):e0140526.
    PMID: 26552032 DOI: 10.1371/journal.pone.0140526
    In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. The standard GSA (SGSA) utilizes the best agents without any randomization, so it is more prone to converge to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performance to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA.
    Matched MeSH terms: Benchmarking
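The stochastic-leader and pool-shrinking ideas described above can be sketched as two small helpers; the fitness values and shrink schedule are illustrative, not the paper's settings:

```python
# A sketch of the two SL-GSA ingredients named in the abstract:
# (1) draw k agents at random as candidate leaders, and
# (2) shrink the agent pool by dropping the worst performers.
import random

def select_leaders(fitness, k):
    """Randomly choose k agent indices as leaders (stochastic leader step)."""
    return random.sample(range(len(fitness)), k)

def shrink_pool(fitness, keep):
    """Keep the `keep` best agents (lower fitness is better) so later
    iterations converge rapidly."""
    ranked = sorted(range(len(fitness)), key=fitness.__getitem__)
    return ranked[:keep]
```

Early iterations use a large random leader set for exploration; as the pool shrinks, the search behaves more like the standard best-agent GSA.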