Displaying all 6 publications

  1. Sajid MR, Muhammad N, Zakaria R, Shahbaz A, Bukhari SAC, Kadry S, et al.
    Interdiscip Sci, 2021 Jun;13(2):201-211.
    PMID: 33675528 DOI: 10.1007/s12539-021-00423-w
    BACKGROUND: In healthcare, prediction often carries more value than explanation, given the cost of delays in service delivery. The literature offers various risk prediction models for early assessment of cardiovascular diseases (CVDs). However, the substantial increase in CVD-related mortality is challenging global health systems, especially in developing countries, which motivates researchers to improve CVD prediction models with new features and risk-computation methods. This study assesses nonclinical features, which are readily available in any healthcare system, for predicting CVDs using advanced and flexible machine learning (ML) algorithms.

    METHODS: A gender-matched case-control study was conducted in the largest public-sector cardiac hospital of Pakistan, and data on 460 subjects were collected. The dataset comprised eight nonclinical features. Four supervised ML algorithms were used to train and test models predicting CVD status, with traditional logistic regression (LR) as the baseline model. The models were validated through a train-test split (70:30) and tenfold cross-validation (a minimal sketch of this protocol follows this entry).

    RESULTS: Random forest (RF), a nonlinear ML algorithm, performed better than the other ML algorithms and LR. The area under the curve (AUC) of RF was 0.851 under the train-test split and 0.853 under tenfold cross-validation. The nonclinical features yielded admissible accuracy (minimum 71%) across the LR and ML models, demonstrating their predictive capability in risk estimation.

    CONCLUSION: The satisfactory performance of nonclinical features shows that these features, combined with flexible computational methodologies, can reinforce existing risk prediction models and improve healthcare services.
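
    A minimal sketch of the validation protocol above, assuming a synthetic stand-in for the 460-subject, eight-feature dataset (the paper's actual features are not reproduced here); it compares the LR baseline with RF under both the 70:30 split and tenfold cross-validation, reporting AUC:

```python
# Sketch only: synthetic data stands in for the paper's nonclinical features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in: 460 subjects, 8 nonclinical features, binary CVD status.
X, y = make_classification(n_samples=460, n_features=8, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

for name, model in [("LR baseline", LogisticRegression(max_iter=1000)),
                    ("RF", RandomForestClassifier(random_state=0))]:
    model.fit(X_train, y_train)
    split_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    cv_auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name}: 70:30 AUC={split_auc:.3f}, 10-fold CV AUC={cv_auc:.3f}")
```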

  2. Lakhan A, Abed Mohammed M, Kadry S, Hameed Abdulkareem K, Taha Al-Dhief F, Hsu CH.
    PeerJ Comput Sci, 2021;7:e758.
    PMID: 34901423 DOI: 10.7717/peerj-cs.758
    The intelligent reflecting surface (IRS) is a ground-breaking technology that can boost the efficiency of wireless data transmission systems: the wireless signal propagation environment is reconfigured by simultaneously adjusting a large number of small reflecting units. IRS has therefore been suggested as a possible solution for improving several aspects of future wireless communication. However, although individual nodes are empowered in the IRS mechanism, decisions and learning are still made by a centralized node, and previous works have largely overlooked energy-efficient and delay-aware learning for IRS-assisted communications. This paper proposes the federated learning aware Intelligent Reconfigurable Surface Task Scheduling scheme (FL-IRSTS) to achieve high-speed communication with energy- and delay-efficient offloading and scheduling. Training is divided among different nodes, and the trained model decides the IRSTS configuration that best meets the goals in terms of communication rate. Multiple local models are trained on the local healthcare fog-cloud network for each workload using federated learning (FL) to generate a global model; each trained model then shares its configuration with the global model for the next training round, and each application's healthcare data is handled and processed locally during training. Simulation results show that the achievable rate of the proposed algorithm effectively approaches centralized machine learning (ML) while meeting the study's energy and delay objectives.
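
    The global-model aggregation the abstract alludes to can be pictured with a minimal FedAvg-style sketch; the node count, parameter shapes, and sample counts below are invented for illustration and are not taken from the paper:

```python
# Sketch of federated averaging: each fog node trains locally and shares
# only its parameters; the global model is a sample-weighted average.
import numpy as np

def federated_average(local_weights, sample_counts):
    """Average node parameters, weighted by each node's local sample count."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

# Three hypothetical healthcare fog nodes with locally trained parameters.
local_weights = [np.random.randn(4) for _ in range(3)]
sample_counts = [120, 80, 200]

global_weights = federated_average(local_weights, sample_counts)
print("global parameters:", global_weights)
# Each node starts the next round from global_weights, so raw healthcare
# data never leaves its node.
```
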
  3. Mohammed MA, Al-Khateeb B, Yousif M, Mostafa SA, Kadry S, Abdulkareem KH, et al.
    Comput Intell Neurosci, 2022;2022:1307944.
    PMID: 35996653 DOI: 10.1155/2022/1307944
    Due to the COVID-19 pandemic, computerized COVID-19 diagnosis studies are proliferating. The diversity of COVID-19 models raises the questions of which diagnostic model should be selected and which performance criteria decision-makers in healthcare organizations should consider, so a selection scheme is necessary. This study proposes an integrated method for selecting the optimal deep learning model for COVID-19 diagnosis based on a novel crow swarm optimization algorithm. The crow swarm optimization is employed to find an optimal set of coefficients for a designed fitness function that evaluates the performance of the deep learning models, and it is modified to obtain a good coefficient distribution by considering the best average fitness. Two datasets were utilized: the first includes 746 computed tomography (CT) images, 349 of confirmed COVID-19 cases and 397 of healthy individuals; the second is composed of unimproved CT lung images of 632 positive COVID-19 cases. Fifteen trained and pretrained deep learning models and nine evaluation metrics were used to evaluate the developed methodology. Among the pretrained CNN and deep models on the first dataset, ResNet50 achieved an accuracy of 91.46% and an F1-score of 90.49% and was selected as the optimal deep learning model for COVID-19 identification, with an overall closeness fitness value of 5715.988. For the second dataset, VGG16 was selected as the optimal model, with an overall closeness fitness value of 5758.791. InceptionV3 had the lowest performance on both datasets. The proposed evaluation methodology is a helpful tool for healthcare managers in selecting and evaluating optimal COVID-19 diagnosis models based on deep learning.
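
    The fitness the optimizer tunes can be pictured as a weighted sum over evaluation metrics. In the sketch below, ResNet50's accuracy and F1-score come from the abstract; every other metric value and all of the weights are invented placeholders, not the paper's coefficients:

```python
# Sketch of weighted-sum model scoring; a crow swarm optimizer would
# search over the weights, which are fixed placeholders here.
metrics = ["accuracy", "f1", "precision", "recall"]  # subset of the nine

candidates = {  # ResNet50 accuracy/F1 from the abstract; the rest invented
    "ResNet50":    {"accuracy": 0.9146, "f1": 0.9049, "precision": 0.91, "recall": 0.90},
    "VGG16":       {"accuracy": 0.9000, "f1": 0.8900, "precision": 0.90, "recall": 0.88},
    "InceptionV3": {"accuracy": 0.8500, "f1": 0.8400, "precision": 0.85, "recall": 0.83},
}

weights = {"accuracy": 0.4, "f1": 0.3, "precision": 0.15, "recall": 0.15}

def fitness(scores):
    """Score one candidate model as a weighted sum of its metrics."""
    return sum(weights[m] * scores[m] for m in metrics)

best = max(candidates, key=lambda name: fitness(candidates[name]))
print({name: round(fitness(s), 4) for name, s in candidates.items()})
print("selected model:", best)
```
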
  4. Abdi Alkareem Alyasseri Z, Alomari OA, Al-Betar MA, Awadallah MA, Hameed Abdulkareem K, Abed Mohammed M, et al.
    Comput Intell Neurosci, 2022;2022:5974634.
    PMID: 35069721 DOI: 10.1155/2022/5974634
    Recently, the electroencephalogram (EEG) signal has shown excellent potential for new person identification techniques. Several studies have characterized the EEG as having unique features, universality, and natural robustness against spoofing attacks. EEG signals are recordings of the brain's electrical activity, measured by placing electrodes (channels) at various scalp positions. However, traditional EEG-based systems with many channels incur high complexity, and only some channels carry information critical to the identification system. Several studies have proposed single-objective approaches to EEG channel selection for person identification; unfortunately, these studies focused only on increasing the accuracy rate without balancing accuracy against the total number of selected EEG channels. The novelty of this paper is a multiobjective binary version of the cuckoo search algorithm (MOBCS-KNN) for finding optimal EEG channel selections for person identification. MOBCS-KNN uses a weighted-sum technique to implement the multiobjective approach and a KNN classifier for EEG-based biometric person identification. It is worth mentioning that this is the initial investigation of a multiobjective technique for the EEG channel selection problem. A standard EEG motor imagery dataset is used to evaluate the performance of MOBCS-KNN. The experiments show that MOBCS-KNN obtains an accuracy of 93.86% using only 24 sensors with AR20 autoregressive coefficients. Another notable result is that MOBCS-KNN selects channels that are not too close to each other, capturing relevant information from all over the head. In conclusion, MOBCS-KNN achieves the best results compared with other metaheuristic algorithms, and the recommended approach can be applied in other research areas.
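
    The weighted-sum balance between accuracy and channel count can be sketched as below; the dataset, channel count, classifier settings, and trade-off weight alpha are assumptions for illustration, not the paper's configuration:

```python
# Sketch: fitness for a binary channel mask, balancing KNN accuracy
# against the fraction of channels kept; a binary cuckoo search would
# evolve such masks, while a single random mask is scored here.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_channels = 64                                # hypothetical montage size
X = rng.standard_normal((200, n_channels))     # stand-in feature matrix
y = rng.integers(0, 5, size=200)               # stand-in subject labels

def fitness(mask, alpha=0.9):
    """Weighted sum: accuracy term minus a channel-count penalty."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=3),
                          X[:, mask], y, cv=3).mean()
    return alpha * acc - (1 - alpha) * mask.sum() / n_channels

mask = rng.random(n_channels) < 0.4
print(f"{mask.sum()} channels selected, fitness = {fitness(mask):.3f}")
```
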
  5. Abdulkareem KH, Mostafa SA, Al-Qudsy ZN, Mohammed MA, Al-Waisy AS, Kadry S, et al.
    J Healthc Eng, 2022;2022:5329014.
    PMID: 35368962 DOI: 10.1155/2022/5329014
    Coronavirus disease 2019 (COVID-19) is a novel disease that affects healthcare on a global scale and cannot be ignored because of its high fatality rate. Computed tomography (CT) images are presently being employed to assist doctors in detecting COVID-19 in its early stages. In several scenarios, a combination of epidemiological criteria (contact during the incubation period), clinical symptoms, laboratory tests (nucleic acid amplification tests), and clinical imaging is used to diagnose COVID-19; this method can miss patients and cause complications. Deep learning has proven prominent and reliable in several diagnostic domains involving medical imaging. This study develops a COVID-19 diagnostic system from a convolutional neural network (CNN), a stacked autoencoder, and a deep neural network; classification undergoes some modification before the three techniques are applied to CT images to distinguish normal from COVID-19 cases. A large-scale and challenging CT image dataset was used to train the employed deep learning models and report their final performance. Experimental outcomes show that the highest accuracy was achieved by the CNN model, with an accuracy of 88.30%, a sensitivity of 87.65%, and a specificity of 87.97%. Furthermore, the proposed system outperforms existing state-of-the-art models in detecting COVID-19 from CT images.
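
    A minimal CNN in the spirit of the abstract's CNN branch, assuming 224x224 grayscale CT slices; the architecture and hyperparameters are invented for illustration, since the abstract does not specify them:

```python
# Sketch: small binary CNN for normal vs. COVID-19 CT slices.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(224, 224, 1)),      # grayscale CT slice (assumed size)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # P(COVID-19)
])

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
model.summary()
# model.fit(train_images, train_labels, validation_split=0.1, epochs=10)
```
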
  6. Mohammed MA, Lakhan A, Abdulkareem KH, Abd Ghani MK, Marhoon HA, Kadry S, et al.
    J Adv Res, 2023 Oct 13.
    PMID: 37839503 DOI: 10.1016/j.jare.2023.10.005
    INTRODUCTION: The Industrial Internet of Water Things (IIoWT) has recently emerged as a leading architecture for efficient water distribution in smart cities. Its primary purpose is to ensure high-quality drinking water for institutions and households. However, the existing IIoWT architecture faces many challenges; one of the paramount challenges is achieving data standardization and data fusion across the multiple monitoring institutions responsible for assessing water quality and quantity.

    OBJECTIVE: This paper introduces an Industrial Internet of Water Things system for data standardization based on blockchain and digital twin technology. The main objective of this study is to design a new IIoWT architecture in which data standardization, interoperability, and data security among different water institutions are ensured.

    METHODS: We devise a digital twin-enabled cross-platform environment using the Message Queuing Telemetry Transport (MQTT) protocol to achieve seamless interoperability across heterogeneous computing platforms (a minimal publishing sketch follows this entry). In water management we encounter different types of data from various sensors, so we propose a CNN-LSTM and blockchain data transactional (BCDT) scheme for processing valid data across different nodes.

    RESULTS: Through simulation results, we demonstrate that the proposed IIoWT architecture significantly reduces processing time while improving the accuracy of data standardization within the water distribution management system.

    CONCLUSION: Overall, this paper presents a comprehensive approach to tackle the challenges of data standardization and security in the IIoWT architecture.
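
    A minimal sketch of the MQTT transport named in the METHODS, assuming a hypothetical broker address, topic layout, and payload fields (none of these identifiers come from the paper):

```python
# Sketch: publishing one standardized water-quality reading over MQTT.
import json

import paho.mqtt.publish as publish

BROKER = "broker.example.org"         # hypothetical institutional broker
TOPIC = "iiowt/station-12/quality"    # hypothetical topic layout

reading = {"station": "station-12", "ph": 7.1,
           "turbidity_ntu": 0.8, "flow_lpm": 342.5}

# QoS 1: each reading reaches the digital-twin side at least once.
publish.single(TOPIC, json.dumps(reading), qos=1, hostname=BROKER)
```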
