Displaying publications 1 - 20 of 23 in total

  1. Lee S, Abdullah A, Jhanjhi N, Kok S
    PeerJ Comput Sci, 2021;7:e350.
    PMID: 33817000 DOI: 10.7717/peerj-cs.350
    The Industrial Revolution 4.0 began with breakthrough technological advances in 5G, and artificial intelligence has innovatively transformed the manufacturing industry from digitalization and automation to the new era of smart factories. A smart factory does more than just produce products in a digital and automated system; it can also optimize production on its own by integrating production with process management, service distribution, and customized product requirements. A big challenge for the smart factory is to ensure that its network security can counteract cyber attacks such as botnet and Distributed Denial of Service attacks, which are recognized to cause serious interruptions in production and, consequently, economic losses for producers. Among many security solutions, botnet detection using honeypots has been shown to be effective in several studies. It is a method of detecting botnet attackers by intentionally creating a resource within the network for the purpose of closely monitoring and acquiring botnet attack behaviors. For the first time, a proposed botnet detection model was evaluated by combining a honeypot with machine learning to classify botnet attacks. An environment mimicking a smart factory was created on an IoT device hardware configuration. Experimental results showed that the model achieved a high accuracy of above 96%, a very fast detection time of just 0.1 ms, and a false positive rate of 0.24127 using the random forest algorithm in the Weka machine learning program. Hence, the honeypot-combined machine learning model in this study proved highly feasible for detecting botnet attacks in the security network of a smart factory.
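    A minimal sketch of the classification stage described above, assuming honeypot logs have already been converted into a numeric feature table (the file name and "label" column are hypothetical). The paper used the random forest implementation in Weka; this illustration uses scikit-learn instead.

    # Illustrative only: classify honeypot traffic records as botnet or benign
    # with a random forest, mirroring the approach described in the abstract.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, confusion_matrix

    # Hypothetical CSV of features extracted from honeypot logs; the column
    # "label" marks botnet (1) vs. benign (0) traffic.
    data = pd.read_csv("honeypot_features.csv")
    X, y = data.drop(columns=["label"]), data["label"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(X_tr, y_tr)

    pred = clf.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print("accuracy:", accuracy_score(y_te, pred))
    print("false positive rate:", fp / (fp + tn))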
  2. A Almusaylim Z, Jhanjhi NZ, Alhumam A
    Sensors (Basel), 2020 Oct 22;20(21).
    PMID: 33105891 DOI: 10.3390/s20215997
    The rapid growth of the Internet of Things (IoT) and the massive propagation of wireless technologies have revealed recent opportunities for development in various domains of real life, such as smart cities and E-Health applications. The current secure and lightweight Routing Protocol for Low Power and Lossy Networks (RPL) for IoT resource-constrained devices offers only slight defense against different forms of attack, and data packets are highly likely to be exposed during data packet routing. The RPL rank and version number attacks, which are two forms of RPL attacks, can have critical consequences for RPL networks. The studies conducted on these attacks have several security defects and performance shortcomings. In this research, we propose a Secure RPL Routing Protocol (SRPL-RP) for rank and version number attacks. It detects, mitigates, and isolates attacks in RPL networks. The detection is based on a comparison of the rank strategy. The mitigation uses threshold and attack status tables, and the isolation adds attackers to a blacklist table and alerts nodes to skip them. SRPL-RP supports diverse types of network topologies and is comprehensively analyzed against multiple schemes, such as Standard RPL with Attacks, Sink-Based Intrusion Detection Systems (SBIDS), and RPL+Shield. The analysis results showed that SRPL-RP achieved significant improvements with a Packet Delivery Ratio (PDR) of 98.48%, a control message value of 991 packets/second, and an average energy consumption of 1231.75 joules. SRPL-RP provided a better accuracy rate of 98.30% under the attacks.
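    The detection step is described as a comparison of advertised ranks; a highly simplified sketch of that idea follows. The thresholds, tolerance, and table layout here are assumptions for illustration, not the published SRPL-RP specification.

    # Toy illustration of rank-based detection: a node whose advertised rank
    # is implausibly lower than its parent's rank is flagged, recorded in an
    # attack-status table, and blacklisted so neighbours skip it.
    RANK_THRESHOLD = 128          # assumed minimum rank increase per hop
    attack_status = {}            # node_id -> number of violations
    blacklist = set()

    def check_rank(node_id, advertised_rank, parent_rank):
        if advertised_rank < parent_rank + RANK_THRESHOLD:
            attack_status[node_id] = attack_status.get(node_id, 0) + 1
            if attack_status[node_id] >= 3:      # assumed tolerance before isolation
                blacklist.add(node_id)           # isolate the suspected attacker
            return False                         # mitigation: reject this advertisement
        return True

    # Example: a child claiming a rank below its parent's is rejected.
    print(check_rank("n7", advertised_rank=200, parent_rank=256))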
  3. Zerdoumi S, Hashem IAT, Jhanjhi NZ
    Multimed Tools Appl, 2022 Jan 29.
    PMID: 35125925 DOI: 10.1007/s11042-021-11339-4
    A growing amount of research in the digital domain, together with advances in Artificial Intelligence, Computer Vision, and Machine Learning, has led to progressive techniques that aim to detect and process affective information contained in multi-modal evidence. This research intends to bring together theoreticians and practitioners from academia, the professions, and industry, and extends to visualizing crises such as epidemics, votes, and social phenomena in an interactive spherical-representation model, covering a broad range of topics relevant to multi-modal data processing and the development of forensics tools. Furthermore, mapping claims in the present epoch requires the capability of a virtual guide: an understandable geo-visualization of spatial features able to convert quantities of spatial patterns into cartography. A novel approach is developed for visualizing the spatial pattern of constituencies: starting from an exclusive input set of objects O with an associated feature set F, the output set C is regenerated for a region of interest I and a special target C. Even so, as indicated by the construction of the prototype, there is scope for improvement: the representation could be used with Google Earth, which provides 3D or 4D interaction with live measures from a cartographic viewpoint. In addition, the work suggests that the lack of a tool for disseminating information to the public can be addressed by online mapping fused with trend visualization for political circles and electors. This study supports new, multi-dimensional deployments in conjunction with other processed data, using comprehensive, well-interpreted source data such as that of Malaysia's Jabatan Pendaftaran (JPN).
  4. Alkinani MH, Almazroi AA, Jhanjhi NZ, Khan NA
    Sensors (Basel), 2021 Oct 18;21(20).
    PMID: 34696118 DOI: 10.3390/s21206905
    Internet of Things (IoT) and 5G are enabling intelligent transportation systems (ITSs). ITSs promise to improve road safety in smart cities, and they are therefore gaining earnest attention in industry as well as academia. Due to the rapid increase in population, vehicle numbers are increasing, resulting in a large number of road accidents. Much of the time, casualties are not appropriately discovered and reported to hospitals and relatives, and this lack of rapid care and first aid might result in loss of life in a matter of minutes. To address all of these challenges, an intelligent system is necessary. Although several information communication technology (ICT)-based solutions for accident detection and rescue operations have been proposed, these solutions are not compatible with all vehicles and are also costly. Therefore, we propose a reporting and accident detection system (RAD) for a smart city that is compatible with any vehicle and less expensive. Our strategy aims to improve the transportation system at a low cost. In this context, we developed an Android application that collects data related to the sound, gravitational force, pressure, speed, and location of the accident from the smartphone. The speed value helps to improve the accident detection accuracy. The collected information is further processed for accident identification. Additionally, a navigation system is designed to inform the relatives, the police station, and the nearest hospital. The hospital dispatches a UAV (i.e., a drone with a first aid box) and an ambulance to the accident spot. An actual dataset from the Road Safety Open Repository is used for results generation through simulation. The proposed scheme shows promising results in terms of accuracy and response time as compared to existing techniques.
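    A toy sketch of the kind of rule the abstract describes, combining sound, g-force, pressure, and speed readings from the phone; all thresholds and field names are assumptions for illustration, not the paper's detection logic.

    # Illustrative accident-detection rule over smartphone sensor readings.
    from dataclasses import dataclass

    @dataclass
    class Reading:
        sound_db: float      # microphone level
        g_force: float       # peak acceleration magnitude (in g)
        pressure: float      # barometric pressure change (hPa)
        speed_kmh: float     # GPS speed just before the event

    def is_accident(r: Reading) -> bool:
        # Assumed thresholds: a loud impact plus a high g-force spike while
        # the vehicle was moving is treated as a probable crash.
        return r.sound_db > 110 and r.g_force > 4.0 and r.speed_kmh > 20

    if is_accident(Reading(sound_db=120, g_force=6.2, pressure=1.5, speed_kmh=65)):
        print("accident suspected: notify hospital, police, relatives")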
  5. Alwakid G, Gouda W, Humayun M, Jhanjhi NZ
    Digit Health, 2023;9:20552076231203676.
    PMID: 37766903 DOI: 10.1177/20552076231203676
    Prolonged hyperglycemia can cause diabetic retinopathy (DR), which is a major contributor to blindness. Numerous incidences of DR may be avoided if it is identified and addressed promptly. Throughout recent years, many deep learning (DL)-based algorithms have been proposed to facilitate psychometric testing. Utilizing a DL model that encompassed four scenarios, DR and its stages were identified in this study using retinal scans from the "Asia Pacific Tele-Ophthalmology Society (APTOS) 2019 Blindness Detection" dataset. Adopting a DL model then led to the use of augmentation strategies that produced a comprehensive dataset with consistent hyperparameters across all test cases. As a further step in the classification process, we used a Convolutional Neural Network model. Different enhancement methods were used to raise visual quality. The proposed approach detected DR with a highest experimental accuracy of 97.83%, a top-2 accuracy of 99.31%, and a top-3 accuracy of 99.88% across all five severity stages of the APTOS 2019 evaluation, employing CLAHE and ESRGAN techniques for image enhancement. In addition, we employed APTOS 2019 to develop a set of evaluation metrics (precision, recall, and F1-score) to use in analyzing the efficacy of the suggested model. The proposed approach was also proven to be more efficient at locating DR than both state-of-the-art technology and conventional DL.
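    A minimal sketch of the CLAHE enhancement step mentioned above using OpenCV; the file name and CLAHE parameters are placeholders, and the ESRGAN super-resolution stage and the CNN classifier are omitted here.

    # Contrast enhancement of a retinal fundus image with CLAHE, as one of the
    # image-quality steps described in the abstract.
    import cv2

    img = cv2.imread("fundus.png")                       # hypothetical input file
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)           # enhance luminance only
    l, a, b = cv2.split(lab)

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)

    enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
    cv2.imwrite("fundus_clahe.png", enhanced)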
  6. Alwakid G, Gouda W, Humayun M, Jhanjhi NZ
    Diagnostics (Basel), 2023 May 22;13(10).
    PMID: 37238299 DOI: 10.3390/diagnostics13101815
    When it comes to skin tumors and cancers, melanoma ranks among the most prevalent and deadly. With the advancement of deep learning and computer vision, it is now possible to quickly and accurately determine whether or not a patient has malignancy. This is significant since a prompt identification greatly decreases the likelihood of a fatal outcome. Artificial intelligence has the potential to improve healthcare in many ways, including melanoma diagnosis. In a nutshell, this research employed an Inception-V3 and InceptionResnet-V2 strategy for melanoma recognition. The feature extraction layers that were previously frozen were fine-tuned after the newly added top layers were trained. This study used data from the HAM10000 dataset, which included an unrepresentative sample of seven different forms of skin cancer. To fix the discrepancy, we utilized data augmentation. The proposed models outperformed the results of the previous investigation with an effectiveness of 0.89 for Inception-V3 and 0.91 for InceptionResnet-V2.
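    A sketch of the transfer-learning recipe described above (train new top layers on a frozen Inception-V3 backbone, then unfreeze the feature extraction layers for fine-tuning); the layer sizes, learning rates, and data pipeline are assumptions, and the fit calls are left commented out because the datasets are not defined here.

    # Transfer learning with Inception-V3 for 7-class skin-lesion classification.
    import tensorflow as tf

    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3))
    base.trainable = False                       # phase 1: freeze the backbone

    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dropout(0.3)(x)
    out = tf.keras.layers.Dense(7, activation="softmax")(x)   # HAM10000 has 7 classes
    model = tf.keras.Model(base.input, out)

    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)   # phase 1: new top layers

    base.trainable = True                        # phase 2: fine-tune the backbone
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)   # phase 2: fine-tuning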
  7. Gaur L, Bhatia U, Jhanjhi NZ, Muhammad G, Masud M
    Multimed Syst, 2023;29(3):1729-1738.
    PMID: 33935377 DOI: 10.1007/s00530-021-00794-6
    The demand for automatic detection of the Novel Coronavirus, or COVID-19, is increasing across the globe. The exponential rise in cases burdens healthcare facilities, and a vast amount of multimedia healthcare data is being explored to find a solution. This study presents a practical solution for detecting COVID-19 from chest X-rays while distinguishing them from normal cases and those affected by viral pneumonia via deep Convolutional Neural Networks (CNN). In this study, three pre-trained CNN models (EfficientNetB0, VGG16, and InceptionV3) are evaluated through transfer learning. The rationale for selecting these specific models is their balance of accuracy and efficiency with fewer parameters, making them suitable for mobile applications. The dataset used for the study is publicly available and compiled from different sources. This study uses deep learning techniques and performance metrics (accuracy, recall, specificity, precision, and F1 scores). The results show that the proposed approach produced a high-quality model, with an overall accuracy of 92.93% and a COVID-19 sensitivity of 94.79%. The work indicates a definite possibility of implementing computer vision designs to enable effective detection and screening measures.
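    The metrics listed above (accuracy, recall/sensitivity, specificity, precision, F1) can all be derived from a confusion matrix; a small sketch with made-up predictions follows, where the class labels are illustrative only.

    # Computing the reported metric types from a confusion matrix
    # (labels here are illustrative: 0 = normal, 1 = viral pneumonia, 2 = COVID-19).
    import numpy as np
    from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

    y_true = np.array([2, 0, 1, 2, 2, 0, 1, 2, 0, 1])
    y_pred = np.array([2, 0, 1, 2, 0, 0, 1, 2, 0, 2])

    cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])
    covid = 2
    tp = cm[covid, covid]
    fn = cm[covid].sum() - tp          # actual COVID cases missed
    fp = cm[:, covid].sum() - tp       # other cases predicted as COVID
    tn = cm.sum() - tp - fn - fp

    print("sensitivity (COVID-19):", tp / (tp + fn))
    print("specificity (COVID-19):", tn / (tn + fp))
    print("precision / recall / F1 (macro):",
          precision_score(y_true, y_pred, average="macro"),
          recall_score(y_true, y_pred, average="macro"),
          f1_score(y_true, y_pred, average="macro"))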
  8. Aldughayfiq B, Ashfaq F, Jhanjhi NZ, Humayun M
    Healthcare (Basel), 2023 Jun 15;11(12).
    PMID: 37372880 DOI: 10.3390/healthcare11121762
    Electronic health records (EHRs) are an increasingly important source of information for healthcare professionals and researchers. However, EHRs are often fragmented, unstructured, and difficult to analyze due to the heterogeneity of the data sources and the sheer volume of information. Knowledge graphs have emerged as a powerful tool for capturing and representing complex relationships within large datasets. In this study, we explore the use of knowledge graphs to capture and represent complex relationships within EHRs. Specifically, we address the following research question: Can a knowledge graph created using the MIMIC III dataset and GraphDB effectively capture semantic relationships within EHRs and enable more efficient and accurate data analysis? We map the MIMIC III dataset to an ontology using text refinement and Protege; then, we create a knowledge graph using GraphDB and use SPARQL queries to retrieve and analyze information from the graph. Our results demonstrate that knowledge graphs can effectively capture semantic relationships within EHRs, enabling more efficient and accurate data analysis. We provide examples of how our implementation can be used to analyze patient outcomes and identify potential risk factors. Our results demonstrate that knowledge graphs are an effective tool for capturing semantic relationships within EHRs, enabling a more efficient and accurate data analysis. Our implementation provides valuable insights into patient outcomes and potential risk factors, contributing to the growing body of literature on the use of knowledge graphs in healthcare. In particular, our study highlights the potential of knowledge graphs to support decision-making and improve patient outcomes by enabling a more comprehensive and holistic analysis of EHR data. Overall, our research contributes to a better understanding of the value of knowledge graphs in healthcare and lays the foundation for further research in this area.
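    A tiny rdflib sketch of the general idea of querying an EHR-style knowledge graph with SPARQL; the ontology terms and triples here are invented stand-ins, not the MIMIC III mapping, Protege ontology, or GraphDB setup used in the study.

    # Build a miniature EHR knowledge graph and query it with SPARQL.
    from rdflib import Graph, Namespace, Literal

    EX = Namespace("http://example.org/ehr/")          # hypothetical ontology namespace
    g = Graph()
    g.bind("ex", EX)

    g.add((EX.patient1, EX.hasDiagnosis, EX.Sepsis))
    g.add((EX.patient1, EX.hasAge, Literal(67)))
    g.add((EX.patient2, EX.hasDiagnosis, EX.Sepsis))
    g.add((EX.patient2, EX.hasAge, Literal(45)))

    query = """
    PREFIX ex: <http://example.org/ehr/>
    SELECT ?patient ?age WHERE {
        ?patient ex:hasDiagnosis ex:Sepsis ;
                 ex:hasAge ?age .
    }"""
    for row in g.query(query):
        print(row.patient, row.age)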
  9. Aldughayfiq B, Ashfaq F, Jhanjhi NZ, Humayun M
    Diagnostics (Basel), 2023 Jul 05;13(13).
    PMID: 37443674 DOI: 10.3390/diagnostics13132280
    Cell counting in fluorescence microscopy is an essential task in biomedical research for analyzing cellular dynamics and studying disease progression. Traditional methods for cell counting involve manual counting or threshold-based segmentation, which are time-consuming and prone to human error. Recently, deep learning-based object detection methods have shown promising results in automating cell counting tasks. However, the existing methods mainly focus on segmentation-based techniques that require a large amount of labeled data and extensive computational resources. In this paper, we propose a novel approach to detect and count multiple-size cells in a fluorescence image slide using You Only Look Once version 5 (YOLOv5) with a feature pyramid network (FPN). Our proposed method can efficiently detect multiple cells with different sizes in a single image, eliminating the need for pixel-level segmentation. We show that our method outperforms state-of-the-art segmentation-based approaches in terms of accuracy and computational efficiency. The experimental results on publicly available datasets demonstrate that our proposed approach achieves an average precision of 0.8 and a processing time of 43.9 ms per image. Our approach addresses the research gap in the literature by providing a more efficient and accurate method for cell counting in fluorescence microscopy that requires less computational resources and labeled data.
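    A brief sketch of detection-based cell counting with a YOLOv5 model loaded via torch.hub: each detected bounding box is one cell, so the count is simply the number of detections. The weight file here is a hypothetical checkpoint fine-tuned on fluorescence images, not the authors' released model.

    # Count cells in a fluorescence image by running a YOLOv5 detector and
    # counting the returned bounding boxes (no pixel-level segmentation needed).
    import torch

    # Hypothetical custom weights fine-tuned for cell detection.
    model = torch.hub.load("ultralytics/yolov5", "custom", path="cells_yolov5.pt")
    model.conf = 0.25                                  # confidence threshold

    results = model("fluorescence_slide.png")          # hypothetical image file
    boxes = results.xyxy[0]                            # (N, 6): x1, y1, x2, y2, conf, cls
    print("detected cells:", len(boxes))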
  10. Aldughayfiq B, Ashfaq F, Jhanjhi NZ, Humayun M
    Diagnostics (Basel), 2023 Jul 21;13(14).
    PMID: 37510187 DOI: 10.3390/diagnostics13142442
    Atrial fibrillation (AF) is a prevalent cardiac arrhythmia that poses significant health risks to patients. The use of non-invasive methods for AF detection, such as the Electrocardiogram (ECG) and Photoplethysmogram (PPG), has gained attention due to their accessibility and ease of use. However, there are challenges associated with ECG-based AF detection, and the significance of PPG signals in this context has been increasingly recognized. The limitations of ECG and the untapped potential of PPG are taken into account as this work attempts to classify AF and non-AF using PPG time series data and deep learning. In this work, we employed a hybrid deep neural network comprising a 1D CNN and a BiLSTM for the task of AF classification. We addressed the under-researched area of applying deep learning methods to transmissive PPG signals by proposing a novel approach. Our approach involved integrating ECG and PPG signals as multi-featured time series data and training deep learning models for AF classification. Our hybrid 1D CNN and BiLSTM model achieved an accuracy of 95% on test data in identifying atrial fibrillation, showcasing its strong performance and reliable predictive capabilities. Furthermore, we evaluated the performance of our model using additional metrics. The precision of our classification model was measured at 0.88, indicating its ability to accurately identify true positive cases of AF. The recall, or sensitivity, was measured at 0.85, illustrating the model's capacity to detect a high proportion of actual AF cases. Additionally, the F1 score, which combines both precision and recall, was calculated at 0.84, highlighting the overall effectiveness of our model in classifying AF and non-AF cases.
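    A sketch of a hybrid 1D CNN + BiLSTM classifier of the kind described above, taking fixed-length two-channel (ECG + PPG) windows; the window length and layer sizes are assumptions, not the authors' architecture.

    # Hybrid 1D CNN + BiLSTM for binary AF / non-AF classification over
    # multi-channel (ECG + PPG) time-series windows.
    import tensorflow as tf

    WINDOW, CHANNELS = 1000, 2          # assumed samples per window and signal channels

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
        tf.keras.layers.Conv1D(32, 5, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, 5, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # AF vs. non-AF
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
    model.summary()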
  11. Alwakid G, Gouda W, Humayun M, Jhanjhi NZ
    Digit Health, 2023;9:20552076231194942.
    PMID: 37588156 DOI: 10.1177/20552076231194942
    OBJECTIVE: Diabetic retinopathy (DR) can sometimes be treated and prevented from causing irreversible vision loss if it is caught and addressed promptly. In this work, a deep learning (DL) model is employed to accurately identify all five stages of DR.

    METHODS: The suggested methodology considers two cases, one with and one without image augmentation. A balanced dataset meeting the same criteria in both cases is then generated using augmentation methods. The DenseNet-121-based model performed exceptionally well on the Asia Pacific Tele-Ophthalmology Society (APTOS) and dataset for diabetic retinopathy (DDR) datasets when compared to other methods for identifying the five stages of DR.

    RESULTS: Our proposed model achieved the highest test accuracy of 98.36%, a top-2 accuracy of 100%, and a top-3 accuracy of 100% for the APTOS dataset, and the highest test accuracy of 79.67%, a top-2 accuracy of 92.76%, and a top-3 accuracy of 98.94% for the DDR dataset (top-k accuracy is illustrated in the sketch after this abstract). Additional criteria (precision, recall, and F1-score) for gauging the efficacy of the proposed model were established with the help of APTOS and DDR.

    CONCLUSIONS: It was found that feeding the model higher-quality images increased its efficiency and learning ability, compared with both state-of-the-art techniques and the other, non-enhanced model.
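    Top-2 and top-3 accuracy as reported above simply check whether the true stage appears among the model's k highest-scoring classes; a small numpy sketch with made-up scores follows.

    # Top-k accuracy over 5 DR severity stages, using illustrative scores.
    import numpy as np

    def top_k_accuracy(probs, labels, k):
        # probs: (n_samples, n_classes) predicted scores; labels: true class indices.
        topk = np.argsort(probs, axis=1)[:, -k:]
        return np.mean([labels[i] in topk[i] for i in range(len(labels))])

    probs = np.array([[0.1, 0.6, 0.1, 0.1, 0.1],
                      [0.3, 0.3, 0.2, 0.1, 0.1],
                      [0.0, 0.1, 0.2, 0.3, 0.4]])
    labels = np.array([1, 2, 3])

    print("top-1:", top_k_accuracy(probs, labels, 1))
    print("top-2:", top_k_accuracy(probs, labels, 2))
    print("top-3:", top_k_accuracy(probs, labels, 3))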

  12. Zerdoumi S, Jhanjhi NZ, Ariyaluran Habeeb RA, Hashem IAT
    PeerJ Comput Sci, 2023;9:e1465.
    PMID: 38192476 DOI: 10.7717/peerj-cs.1465
    Based on the results of this research, a new method for separating offline Arabic text is presented. This method finds the core splitter between the "Middle" and "Lower" zones by looking for sharp character degeneration in those zones. In addition to script localization and the essential task of determining the direction of a starting point, the baseline also functions as a delimiter for horizontal projections. While the bottom half of the characteristics is utilized to differentiate the modifiers in zones, the top half of the characteristics is not. This method works best when the baseline is able to divide features into the bottom zone and the middle zone in a complex pattern where the alphabet is hard to identify, as in ancient scripts. Furthermore, this technique performed well when it came to distinguishing Arabic text, including calligraphy. With the zoning system, the aim is to decrease the number of distinct element classes associated with the total number of alphabets used in Arabic cursive writing. The components are identified using the pixel value origin and center reign (CR) technique, which is combined with letter morphology to achieve complete word-level identification. Using the upper baseline and lower baseline together, the proposed technique produces a consistent Arabic pattern, which is intended to improve identification rates by increasing the number of matches. For Mediterranean keywords (cities in Algeria and Tunisia), the suggested approach yields indications that the correctness for the Othmani and Arabic scripts is greater than 98.14 percent and 90.16 percent, respectively, based on 84 and 117 verses. As a consequence of the auditing method and the structure of the assessment section and software, the major problems were identified, with a few of them being specifically highlighted.
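    The baseline-as-delimiter idea above can be illustrated with a horizontal projection profile: the text row with the most ink is a common estimate of the baseline separating the middle and lower zones. This is a simplified sketch of that generic technique, not the paper's full zoning algorithm.

    # Estimate a baseline row in a binarized text-line image from its
    # horizontal projection profile (row-wise ink counts).
    import numpy as np

    def estimate_baseline(binary_line):
        # binary_line: 2-D array with 1 for ink pixels, 0 for background.
        profile = binary_line.sum(axis=1)       # horizontal projection
        return int(np.argmax(profile))          # densest row as baseline estimate

    # Toy 6x8 "line" whose densest row is row 3.
    line = np.zeros((6, 8), dtype=int)
    line[3, :] = 1
    line[2, 2:5] = 1
    print("baseline row:", estimate_baseline(line))   # -> 3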
  13. Mushtaq M, Ullah A, Ashraf H, Jhanjhi NZ, Masud M, Alqhatani A, et al.
    Sensors (Basel), 2023 May 31;23(11).
    PMID: 37299944 DOI: 10.3390/s23115217
    The Internet of Vehicles (IoV) is an innovative paradigm which ensures a safe journey by communicating with other vehicles. It involves a basic safety message (BSM) that contains sensitive information in plain text that can be subverted by an adversary. To reduce such attacks, a pool of pseudonyms is allotted, which are changed regularly in different zones or contexts. In base schemes, the BSM is sent to neighbors just by considering their speed. However, this parameter is not enough because the network topology is very dynamic and vehicles can change their route at any time. This problem increases pseudonym consumption, which ultimately increases communication overhead, increases traceability, and leads to high BSM loss. This paper presents an efficient pseudonym consumption protocol (EPCP) which considers vehicles travelling in the same direction with a similar estimated location. The BSM is shared only with these relevant vehicles. The performance of the proposed scheme in contrast to base schemes is validated via extensive simulations. The results prove that the proposed EPCP technique outperformed its counterparts in terms of pseudonym consumption, BSM loss rate, and traceability.
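    A toy sketch of the relevance test the abstract describes: a BSM is shared only with neighbours travelling in the same direction and within an estimated location range. The threshold values and message fields are assumptions, not the EPCP specification.

    # Illustrative neighbour filter for the EPCP idea: forward the BSM only to
    # vehicles heading the same way and close to the sender's estimated position.
    import math

    def relevant_neighbours(sender, neighbours, heading_tol_deg=15, range_m=300):
        out = []
        for n in neighbours:
            same_dir = abs(sender["heading"] - n["heading"]) <= heading_tol_deg
            dist = math.hypot(sender["x"] - n["x"], sender["y"] - n["y"])
            if same_dir and dist <= range_m:
                out.append(n["id"])
        return out

    sender = {"id": "v1", "heading": 90, "x": 0, "y": 0}
    neighbours = [{"id": "v2", "heading": 92, "x": 120, "y": 10},
                  {"id": "v3", "heading": 270, "x": 50, "y": 5}]
    print(relevant_neighbours(sender, neighbours))     # -> ['v2']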
  14. Jabeen T, Jabeen I, Ashraf H, Jhanjhi NZ, Yassine A, Hossain MS
    Sensors (Basel), 2023 May 25;23(11).
    PMID: 37299782 DOI: 10.3390/s23115055
    The Internet of Things (IoT) uses infrastructure-less wireless networks to deploy a huge number of wireless sensors that track system, physical, and environmental factors. Wireless sensor networks (WSNs) have a variety of uses, and well-known application factors include energy consumption and lifespan duration for routing purposes. The sensors have sensing, processing, and communication capabilities. In this paper, an intelligent healthcare system is proposed which consists of nano sensors that collect real-time health status and transfer it to the doctor's server. Time consumption and various attacks are major concerns, and some existing techniques contain stumbling blocks. Therefore, in this research, a genetic-based encryption method is advocated to protect data transmitted over a wireless channel using sensors and to avoid an unsafe data transmission environment. An authentication procedure is also proposed for legitimate users to access the data channel. Results show that the proposed algorithm is lightweight and energy efficient, and time consumption is 90% lower with a higher security ratio.
  15. Gandam A, Sidhu JS, Verma S, Jhanjhi NZ, Nayyar A, Abouhawwash M, et al.
    PLoS One, 2021;16(5):e0250959.
    PMID: 33970949 DOI: 10.1371/journal.pone.0250959
    Compression at a very low bit rate (≤0.5 bpp) causes degradation in video frames with standard decoding algorithms like H.261, H.262, H.264, MPEG-1, and MPEG-4, which themselves produce many artifacts. This paper focuses on an efficient pre- and post-processing technique (PP-AFT) to address and rectify the problems of quantization error, ringing, blocking artifacts, and the flickering effect, which significantly degrade the visual quality of video frames. The PP-AFT method differentiates blocked images or frames into different regions using an activity function and applies adaptive filters according to the classified region. The designed process also introduces an adaptive flicker extraction and removal method and a 2-D filter to remove ringing effects in edge regions. The PP-AFT technique is implemented on various videos, and the results are compared with different existing techniques using performance metrics like PSNR-B, MSSIM, and GBIM. Simulation results show significant improvement in the subjective quality of different video frames. The proposed method outperforms state-of-the-art de-blocking methods in terms of PSNR-B, with average gains between 0.7 and 1.9 dB, while reducing the average GBIM by 35.83-47.7% and keeping MSSIM values very close to those of the original sequence (statistically 0.978).
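    For reference, a minimal sketch of a plain PSNR computation between an original and a processed frame; PSNR-B additionally penalizes blocking boundaries, which is not reproduced here.

    # Plain PSNR between an original and a de-blocked frame (8-bit images).
    import numpy as np

    def psnr(original, processed):
        mse = np.mean((original.astype(np.float64) - processed.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

    orig = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    noisy = np.clip(orig + np.random.normal(0, 5, orig.shape), 0, 255).astype(np.uint8)
    print("PSNR (dB):", round(psnr(orig, noisy), 2))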
  16. Ramanjot, Mittal U, Wadhawan A, Singla J, Jhanjhi NZ, Ghoniem RM, et al.
    Sensors (Basel), 2023 May 15;23(10).
    PMID: 37430683 DOI: 10.3390/s23104769
    A significant majority of the population in India makes their living through agriculture. Different illnesses that develop due to changing weather patterns and are caused by pathogenic organisms impact the yields of diverse plant species. The present article analyzed some of the existing techniques in terms of data sources, pre-processing techniques, feature extraction techniques, data augmentation techniques, models utilized for detecting and classifying diseases that affect the plant, how the quality of images was enhanced, how overfitting of the model was reduced, and accuracy. The research papers for this study were selected using various keywords from peer-reviewed publications from various databases published between 2010 and 2022. A total of 182 papers were identified and reviewed for their direct relevance to plant disease detection and classification, of which 75 papers were selected for this review after exclusion based on the title, abstract, conclusion, and full text. Researchers will find this work to be a useful resource in recognizing the potential of various existing techniques through data-driven approaches while identifying plant diseases by enhancing system performance and accuracy.
  17. Jabeen T, Jabeen I, Ashraf H, Ullah A, Jhanjhi NZ, Ghoniem RM, et al.
    Sensors (Basel), 2023 Jul 02;23(13).
    PMID: 37447952 DOI: 10.3390/s23136104
    Programmable Object Interfaces increasingly intrigue researchers because of their broad applications, especially in the medical field. In a Wireless Body Area Network (WBAN), for example, patients' health can be monitored using clinical nano sensors. Exchanging such sensitive data requires a high level of security and protection against attacks. To that end, the literature is rich with security schemes that include the advanced encryption standard, secure hashing algorithms, and digital signatures that aim to secure the data exchange. However, such schemes elevate the time complexity, rendering the data transmission slower. Cognitive radio technology with a medical body area network system involves communication links between WBAN gateways, servers, and nano sensors, which renders the entire system vulnerable to security attacks. In this paper, a novel DNA-based encryption technique is proposed to secure medical data sharing between sensing devices and central repositories. It has lower computational time throughout authentication, encryption, and decryption. Our analysis of experimental attack scenarios shows that our technique performs better than its counterparts.
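    A toy illustration of one generic building block of DNA-based cryptography (mapping bit pairs to nucleotide symbols before further processing); this is a textbook-style encoding for illustration only, not the authors' encryption scheme.

    # Toy DNA encoding: map each pair of bits to a nucleotide. Real DNA-based
    # encryption adds keying and further transformations on the sequence.
    BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

    def dna_encode(data: bytes) -> str:
        bits = "".join(f"{byte:08b}" for byte in data)
        return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def dna_decode(seq: str) -> bytes:
        bits = "".join(BASE_TO_BITS[b] for b in seq)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    msg = b"heart rate: 72"
    encoded = dna_encode(msg)
    assert dna_decode(encoded) == msg
    print(encoded[:16], "...")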
  18. Menon S, Anand D, Kavita, Verma S, Kaur M, Jhanjhi NZ, et al.
    Sensors (Basel), 2023 Jul 04;23(13).
    PMID: 37447981 DOI: 10.3390/s23136132
    With the increasing growth rate of smart home devices and their interconnectivity via the Internet of Things (IoT), security threats to the communication network have become a concern. This paper proposes a learning engine for a smart home communication network that utilizes blockchain-based secure communication and a cloud-based data evaluation layer to segregate and rank data on the basis of three broad categories of Transactions (T), namely Smart T, Mod T, and Avoid T. The learning engine utilizes a neural network for the training and classification of the categories, which helps the blockchain layer improve its decision-making process. The contributions of this paper include the application of a secure blockchain layer for user authentication and the generation of a ledger for the communication network; the utilization of the cloud-based data evaluation layer; the enhancement of an SI-based algorithm for training; and the utilization of a neural engine for the precise training and classification of categories. The proposed algorithm outperformed the Fused Real-Time Sequential Deep Extreme Learning Machine (RTS-DELM) system, the data fusion technique, and artificial intelligence IoT technology for electronic information engineering and optimization-scheme analysis in terms of computation complexity, false authentication rate, and qualitative parameters, achieving a lower average computation complexity. In addition, it ensures a secure, efficient smart home communication network that enhances people's lifestyles.
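    A minimal sketch of the ledger idea underlying the blockchain layer: each recorded transaction block carries the hash of its predecessor, so tampering with history is detectable. The transaction categories Smart T / Mod T / Avoid T are taken from the abstract as labels only; everything else is an assumption, not the paper's protocol.

    # Minimal hash-chained ledger for smart-home transactions.
    import hashlib, json, time

    def make_block(prev_hash, category, payload):
        # category: one of the abstract's labels, e.g. "Smart T", "Mod T", "Avoid T".
        block = {"time": time.time(), "category": category,
                 "payload": payload, "prev_hash": prev_hash}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        return block

    ledger = [make_block("0" * 64, "Smart T", {"device": "thermostat", "cmd": "set 21C"})]
    ledger.append(make_block(ledger[-1]["hash"], "Avoid T",
                             {"device": "lock", "cmd": "unknown origin"}))
    print(ledger[-1]["hash"][:16], "links to", ledger[-1]["prev_hash"][:16])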
  19. Wassan S, Dongyan H, Suhail B, Jhanjhi NZ, Xiao G, Ahmed S, et al.
    Digit Health, 2024;10:20552076231220123.
    PMID: 38250147 DOI: 10.1177/20552076231220123
    BACKGROUND: Deep learning is an AI technology that trains computers to analyze data in an approach similar to the human brain. Deep learning algorithms can find complex patterns in images, text, audio, and other data types to provide accurate predictions and conclusions. Neural networks are another name for deep learning models. The layers of a deep learning model are the input, hidden, and output layers: data is taken in by the input layer, transformed by the hidden layers, and turned into predictions by the output layer. Deep learning has many advantages over traditional machine learning algorithms like K-nearest neighbors, support vector algorithms, and regression approaches. Deep learning models can handle more complex data than traditional machine learning methods.

    OBJECTIVES: This research aims to find the ideal number of hidden layers for the neural network and the best variations of activation functions. The article also thoroughly analyzes how various frameworks can be used to compare or speed up neural networks. The final goal of the article is to investigate the innovative techniques that allow us to speed up the training of neural networks without losing accuracy.

    METHODS: A sample dataset from 2001 was collected from www.Kaggle.com. We can reduce the total number of layers in the deep learning model, which saves time. To perform the ReLU activation, we make use of two fully connected layers. If the input value is greater than zero, the ReLU activation outputs that value directly; otherwise, it returns 0.

    RESULTS: We use multiple parameters to determine the most effective way to test how well our method works. Below, we discuss how the calculation handles secret-shared values. Using 19 training set features, we train our model to predict the (numerical) healthcare cost target feature. We found that 0.89503 was the best choice because it gave us a good fit (R2) and let us set enough coefficients to 0. Developing our stable model with this set of parameters required 26 iterations. On the training set we obtained an R2 of 0.89503, an MSE of 0.01094, an RMSE of 0.10458, a mean residual deviance of 0.01094, a mean absolute error of 0.07452, and a root mean squared log error of 0.07207. After training the model on the training set, we applied the same parameters to the test set and obtained an R2 of 0.90707, an MSE of 0.01045, an RMSE of 0.10224, a mean residual deviance of 0.01045, an MAE of 0.06954, and a root mean squared log error of 0.07051, validating our solution approach. The objective value of our secured model is higher than that of the scikit-learn model, although the former performs better on goodness-of-fit criteria. As a result, our protected model performs quite well, marginally outperforming the (highly optimized) scikit-learn model. Using a backpropagation algorithm and stochastic gradient descent, deep learning builds artificial neural networks with several interconnected layers. The network may contain hidden layers of neurons with tanh, rectifier, and maxout activations. Modern features like momentum training, dropout, adaptive learning rate, rate annealing, and L1 or L2 regularization provide exceptional prediction performance. The global model's parameters are trained multi-threadedly (asynchronously) on the data at each node, and the model is then gradually refined by model averaging across the entire network. The method is executed on a single-node, direct H2O cluster initiated by the operator; the operation is parallel despite there being just a single node involved. The number of threads may be adjusted in the settings menu under Preferences and General; the optimal number of threads for the system is used automatically. Successful predictions on the healthcare data sets are made using the H2O Deep Learning operator. A classification is performed since the label is binomial. The Split Validation operator creates test and training datasets to evaluate the model. By default, the settings of the Deep Learning operator are used; in other words, we construct two hidden layers, each containing 50 neurons (a sketch of this configuration follows this abstract). The Accuracy measure is computed by linking the labeled sample set with a Performance (Binominal Classification) operator. Table 3 displays the Deep Learning model, the labeled data, and the Performance Vector that resulted from this technique.

    CONCLUSIONS: Deep learning algorithms can be used to design systems that report data on patients and deliver warnings to medical applications or electronic health information systems if there are changes in the patient's health. This helps ensure that each patient gets the proper, effective care at the proper time. A healthcare decision support system was presented using the Internet of Things and deep learning methods. In the proposed system, we examined the capability of integrating deep learning technology into automatic diagnosis and IoT capabilities for faster message exchange over the Internet. We selected a suitable neural network structure (the number of hidden layers and the classes of activation functions) to construct the e-health system. In addition, the e-health system relied on data from doctors to train the neural network. In the validation, the overall evaluation of the proposed healthcare system for diagnostics shows dependability under various patient conditions. Based on the evaluation and simulation findings, a feed-forward NN with two hidden layers whose neurons use the tanh function performs more effectively than other NN configurations. To overcome the remaining challenges, this study integrates artificial intelligence with IoT, aiming to determine the NN's optimal layer count and activation function variations.
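    The METHODS and RESULTS above describe an H2O deep learning model with two hidden layers of 50 neurons and tanh activation; a hedged sketch of that setup in the H2O Python API follows. The file name, target column, and epoch count are placeholders, not the study's configuration.

    # Sketch of an H2O deep learning model with two 50-neuron tanh hidden layers,
    # roughly matching the configuration described in the abstract.
    import h2o
    from h2o.estimators import H2ODeepLearningEstimator

    h2o.init()
    frame = h2o.import_file("healthcare_costs.csv")        # hypothetical dataset
    train, test = frame.split_frame(ratios=[0.8], seed=1)

    target = "cost"                                         # hypothetical target column
    predictors = [c for c in frame.columns if c != target]

    model = H2ODeepLearningEstimator(hidden=[50, 50], activation="Tanh", epochs=10)
    model.train(x=predictors, y=target, training_frame=train)

    print(model.model_performance(test))                    # R2, MSE, RMSE, MAE, ...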

  20. Mughal MA, Ullah A, Yu X, He W, Jhanjhi NZ, Ray SK
    Heliyon, 2024 Apr 15;10(7):e27177.
    PMID: 38601685 DOI: 10.1016/j.heliyon.2024.e27177
    The Internet of Things (IoT) is a network of intelligent devices, used especially in healthcare-based systems. The Internet of Medical Things (IoMT) uses wearable sensors to collect data and transmit it to central repositories. Ensuring the security and privacy of healthcare data is a challenging task. The aim of this study is to provide a secure data sharing mechanism. Existing studies provide secure data sharing schemes but still have limitations in terms of hiding the patient identity in the messages exchanged to upload data to central repositories. This paper presents a Secure Aggregated Data Collection and Transmission (SADCT) scheme that provides anonymity for the identities of patients' mobile devices and the intermediate fog nodes. Our system involves an authentication server for node registration and authentication, which stores security credentials. The proposed scheme presents a novel data aggregation algorithm at the mobile device and a data extraction algorithm at the fog node. The work is validated through extensive simulations in NS along with a security analysis. The results demonstrate the superiority of SADCT in terms of energy consumption, storage, communication, and computational costs.