Displaying publications 1 - 20 of 88 in total

  1. Da'u A, Salim N
    PeerJ Comput Sci, 2019;5:e191.
    PMID: 33816844 DOI: 10.7717/peerj-cs.191
    Aspect extraction is a subtask of sentiment analysis that deals with identifying opinion targets in an opinionated text. Existing approaches to aspect extraction typically rely on handcrafted features and linear or integrated network architectures. Although these methods can achieve good performance, they are time-consuming and often very complicated. In real-life systems, a simple model with competitive results is generally more effective and preferable to a complicated model. In this paper, we present a multichannel convolutional neural network for aspect extraction. The model consists of a deep convolutional neural network with two input channels: a word embedding channel, which aims to encode semantic information of the words, and a part-of-speech (POS) tag embedding channel to facilitate the sequential tagging process. To get the vector representation of words, we initialized the word embedding channel and the POS channel using pretrained word2vec vectors and one-hot vectors of POS tags, respectively. Both the word embedding and the POS embedding vectors were fed into the convolutional layer and concatenated into a one-dimensional vector, which is finally pooled and processed using a Softmax function for sequence labeling. We then conducted a series of experiments using four different datasets. The results indicated better performance compared to the baseline models.
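The two-channel design described in this abstract can be sketched as follows. This is a minimal, illustrative numpy mock-up, not the authors' implementation: the dimensions, single filter per channel, and random weights are invented for demonstration, and pooling is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

T, D_WORD, N_POS, N_TAGS = 6, 8, 4, 3       # tokens, embed dim, POS tags, labels
word_emb = rng.normal(size=(T, D_WORD))     # stand-in for pretrained word2vec
pos_onehot = np.eye(N_POS)[rng.integers(0, N_POS, size=T)]  # one-hot POS channel

def conv1d(x, w):
    """Valid 1-D convolution over the token axis (window size w.shape[0])."""
    k = w.shape[0]
    return np.array([(x[i:i + k] * w).sum() for i in range(x.shape[0] - k + 1)])

# One filter per channel; real models use many filters per window size.
w_word = rng.normal(size=(3, D_WORD))
w_pos = rng.normal(size=(3, N_POS))

feat_word = conv1d(word_emb, w_word)
feat_pos = conv1d(pos_onehot, w_pos)
features = np.concatenate([feat_word, feat_pos])   # one-dimensional joint vector

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

W_out = rng.normal(size=(N_TAGS, features.shape[0]))
tag_probs = softmax(W_out @ features)              # distribution over tag labels
```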
  2. Ali T, Jan S, Alkhodre A, Nauman M, Amin M, Siddiqui MS
    PeerJ Comput Sci, 2019;5:e216.
    PMID: 33816869 DOI: 10.7717/peerj-cs.216
    Conventional paper currency and modern electronic currency are two important modes of transaction. In several parts of the world, the conventional methodology has clear precedence over its electronic counterpart. However, the identification of forged currency notes is becoming an increasingly crucial problem because of the new and improved tactics employed by counterfeiters. In this paper, a machine-assisted system, dubbed DeepMoney, is proposed to discriminate fake notes from genuine ones. For this purpose, state-of-the-art machine learning models called Generative Adversarial Networks (GANs) are employed. GANs use unsupervised learning to train a model that can then be used to perform supervised predictions. This flexibility provides the best of both worlds by allowing training on unlabelled data whilst still making concrete predictions. The technique was applied to Pakistani banknotes. State-of-the-art image processing and feature recognition techniques were used to design the overall approach for validating input. Augmented image samples were used in the experiments, which show that a high-precision machine can be developed to recognize genuine paper money. An accuracy of 80% has been achieved. The code is available as open source to allow others to reproduce and build upon the efforts already made.
  3. Zheng S, Rahmat RWO, Khalid F, Nasharuddin NA
    PeerJ Comput Sci, 2019;5:e236.
    PMID: 33816889 DOI: 10.7717/peerj-cs.236
    As the technology for 3D photography has developed rapidly in recent years, an enormous number of 3D images has been produced, and face recognition is one of the research directions built on this data. Maintaining accuracy as the amount of data grows is crucial in 3D face recognition problems. Traditional machine learning methods can be used to recognize 3D faces, but the recognition rate declines rapidly as the number of 3D images increases. As a result, classifying large amounts of 3D image data is time-consuming, expensive, and inefficient. Deep learning methods have therefore become the focus of attention in 3D face recognition research. In our experiment, an end-to-end face recognition system based on 3D face texture is proposed, combining geometric invariants, histograms of oriented gradients and fine-tuned residual neural networks. The research shows that, when performance is evaluated on the FRGC-v2 dataset, as the number of fine-tuned ResNet layers is increased, the best Top-1 accuracy reaches 98.26% and the Top-2 accuracy 99.40%. The proposed framework requires fewer iterations than traditional methods. The analysis suggests that applying the proposed recognition framework to large amounts of 3D face data could significantly improve recognition decisions in realistic 3D face scenarios.
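As an illustration of one ingredient of this pipeline, a bare-bones histogram of oriented gradients can be computed as below. This is a sketch only, not the paper's code: production HOG implementations bin per cell with interpolation and block-normalise, which is skipped here.

```python
import numpy as np

def hog_histogram(img, n_bins=9):
    """Orientation histogram of image gradients, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0        # unsigned orientation
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())          # magnitude-weighted vote
    s = hist.sum()
    return hist / s if s > 0 else hist                  # L1-normalised

demo = np.tile(np.arange(8.0), (8, 1))   # horizontal ramp: all gradients at 0 deg
h = hog_histogram(demo)
```

For the ramp image, all gradient energy lands in the first orientation bin, which is a quick sanity check of the binning.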
  4. Arfa R, Yusof R, Shabanzadeh P
    PeerJ Comput Sci, 2019;5:e206.
    PMID: 33816859 DOI: 10.7717/peerj-cs.206
    Trajectory clustering and path modelling are two core tasks in intelligent transport systems with a wide range of applications, from modeling drivers' behavior to traffic monitoring of road intersections. Traditional trajectory analysis considers them as separate tasks, where the system first clusters the trajectories into a known number of clusters and then the path taken in each cluster is modelled. However, such a hierarchy does not allow the knowledge of the path model to be used to improve the performance of trajectory clustering. Based on the distance dependent Chinese restaurant process (DDCRP), a trajectory analysis system that simultaneously performs trajectory clustering and path modelling was proposed. Unlike most traditional approaches where the number of clusters should be known, the proposed method decides the number of clusters automatically. The proposed algorithm was tested on two publicly available trajectory datasets, and the experimental results recorded better performance and considerable improvement in both datasets for the task of trajectory clustering compared to traditional approaches. The study proved that the proposed method is an appropriate candidate to be used for trajectory clustering and path modelling.
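A minimal sketch of the distance-dependent Chinese restaurant process (DDCRP) prior that drives this kind of clustering follows. The decay function, distances and data here are invented for illustration: each trajectory links to another with probability proportional to a decayed distance, or to itself with probability proportional to alpha, and connected components of the link graph form the clusters, so the number of clusters is not fixed a priori.

```python
import numpy as np

def ddcrp_sample_links(dist, alpha=1.0, decay=1.0, rng=None):
    """Sample one customer link per item under a ddCRP prior."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = dist.shape[0]
    links = np.empty(n, dtype=int)
    for i in range(n):
        w = np.exp(-decay * dist[i])     # decay (window) function f(d_ij)
        w[i] = alpha                     # self-link opens a new cluster
        links[i] = rng.choice(n, p=w / w.sum())
    return links

def clusters_from_links(links):
    """Label connected components of the customer-link graph."""
    n = len(links)
    label = [-1] * n
    c = 0
    for i in range(n):
        if label[i] != -1:
            continue
        path, j = [], i                  # follow links to a label or a cycle
        while label[j] == -1 and j not in path:
            path.append(j)
            j = links[j]
        lab = label[j] if label[j] != -1 else c
        if lab == c:
            c += 1
        for p in path:
            label[p] = lab
    return label

# Two well-separated groups of toy "trajectory features".
pts = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
dist = np.abs(pts - pts.T)
labels = clusters_from_links(ddcrp_sample_links(dist, rng=np.random.default_rng(3)))
```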
  5. Mohd Khairuddin I, Sidek SN, P P Abdul Majeed A, Mohd Razman MA, Ahmad Puzi A, Md Yusof H
    PeerJ Comput Sci, 2021;7:e379.
    PMID: 33817026 DOI: 10.7717/peerj-cs.379
    Electromyography (EMG) signal is one of the extensively utilised biological signals for predicting human motor intention, which is an essential element in human-robot collaboration platforms. Studies on motion intention prediction from EMG signals have often concentrated on either classification or regression models of muscle activity. In this study, we leverage the information from the EMG signals to detect the subject's intentions in generating motion commands for a robot-assisted upper limb rehabilitation platform. The EMG signals are recorded from ten healthy subjects' biceps muscles, and the movements of the upper limb evaluated are voluntary elbow flexion and extension along the sagittal plane. The signals are filtered through a fifth-order Butterworth filter. A number of features were extracted from the filtered signals, namely waveform length (WL), mean absolute value (MAV), root mean square (RMS), standard deviation (SD), minimum (MIN) and maximum (MAX). Several different classifiers, viz. Linear Discriminant Analysis (LDA), Logistic Regression (LR), Decision Tree (DT), Support Vector Machine (SVM) and k-Nearest Neighbour (k-NN), were investigated for their efficacy in accurately classifying the pre-intention and intention classes based on the significant features identified (MIN and MAX) via the Extremely Randomised Tree feature selection technique. It was observed from the present investigation that the DT classifier yielded excellent classification accuracy of 100%, 99% and 99% on the training, testing and validation datasets, respectively, based on the identified features. The findings of the present investigation are non-trivial towards facilitating the rehabilitation phase of patients based on their actual capability and hence would eventually yield more active participation from them.
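The six time-domain features listed in this abstract are straightforward to compute; a sketch for one filtered EMG window follows (the signal values are synthetic, and MIN and MAX are the two features the feature selection retained).

```python
import numpy as np

def emg_features(x):
    """Standard time-domain EMG features for one signal window."""
    x = np.asarray(x, dtype=float)
    return {
        "WL":  np.abs(np.diff(x)).sum(),      # waveform length
        "MAV": np.abs(x).mean(),              # mean absolute value
        "RMS": np.sqrt((x ** 2).mean()),      # root mean square
        "SD":  x.std(),                       # standard deviation
        "MIN": x.min(),
        "MAX": x.max(),
    }

feats = emg_features([0.0, 0.5, -0.5, 1.0, -1.0])
```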
  6. Rashid M, Bari BS, Hasan MJ, Razman MAM, Musa RM, Ab Nasir AF, et al.
    PeerJ Comput Sci, 2021;7:e374.
    PMID: 33817022 DOI: 10.7717/peerj-cs.374
    Brain-computer interface (BCI) is a viable alternative communication strategy for patients with neurological disorders, as it facilitates the translation of human intent into device commands. The performance of BCIs primarily depends on the efficacy of the feature extraction and feature selection techniques, as well as the classification algorithms employed. More often than not, a high dimensional feature set contains redundant features that may degrade a given classifier's performance. In the present investigation, an ensemble learning-based classification algorithm, namely random subspace k-nearest neighbour (k-NN), has been proposed to classify the motor imagery (MI) data. The common spatial pattern (CSP) has been applied to extract the features from the MI response, and the effectiveness of a random forest (RF)-based feature selection algorithm has also been investigated. In order to evaluate the efficacy of the proposed method, an experimental study has been implemented using four publicly available MI datasets (BCI Competition III dataset 1 (data-1), dataset IIIA (data-2), dataset IVA (data-3) and BCI Competition IV dataset II (data-4)). It was shown that the ensemble-based random subspace k-NN approach achieved superior classification accuracy (CA) of 99.21%, 93.19%, 93.57% and 90.32% for data-1, data-2, data-3 and data-4, respectively, against the other models evaluated, namely linear discriminant analysis, support vector machine, random forest, Naïve Bayes and the conventional k-NN. In comparison with other classification approaches reported in recent studies, the proposed method enhanced the accuracy by 2.09% for data-1, 1.29% for data-2, 4.95% for data-3 and 5.71% for data-4. Moreover, it is worth highlighting that the RF feature selection technique employed in the present study was able to significantly reduce the feature dimension without compromising the overall CA. The outcome of the present study implies that the proposed method may significantly enhance the accuracy of MI data classification.
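The random subspace k-NN ensemble at the heart of this abstract can be sketched as follows. This is illustrative only, not the paper's code: each base k-NN sees a random subset of the feature dimensions, and the ensemble predicts by majority vote; the toy data stands in for CSP features.

```python
import numpy as np

def knn_predict(Xtr, ytr, x, k=3):
    """Plain k-NN vote by squared Euclidean distance."""
    idx = np.argsort(((Xtr - x) ** 2).sum(axis=1))[:k]
    return int(np.bincount(ytr[idx]).argmax())

def random_subspace_knn(Xtr, ytr, x, n_learners=9, subspace=2, k=3, seed=0):
    """Majority vote over k-NNs trained on random feature subsets."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_learners):
        dims = rng.choice(Xtr.shape[1], size=subspace, replace=False)
        preds.append(knn_predict(Xtr[:, dims], ytr, x[dims], k=k))
    return int(np.bincount(preds).argmax())

# Toy two-class data: class 0 near the origin, class 1 shifted by 2.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 4)),
               rng.normal(2.0, 0.3, size=(20, 4))])
y = np.array([0] * 20 + [1] * 20)
pred = random_subspace_knn(X, y, np.full(4, 2.0))   # query near class 1
```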
  7. Tahir N, Asif M, Ahmad S, Malik MSA, Aljuaid H, Butt MA, et al.
    PeerJ Comput Sci, 2021;7:e389.
    PMID: 33817035 DOI: 10.7717/peerj-cs.389
    Keyword extraction is essential in determining influential keywords from huge documents, as research repositories are becoming massive in volume day by day. The research community is drowning in data and starving for information. Keywords are the words that describe the theme of the whole document in a precise way using just a few words. Many state-of-the-art approaches are available for keyword extraction from a huge collection of documents, and they are classified into three types: statistical approaches, machine learning, and graph-based methods. The machine learning approaches require a large training dataset that needs to be developed manually by domain experts, which is sometimes difficult to produce while determining influential keywords. This research therefore focused on enhancing state-of-the-art graph-based methods to extract keywords when a training dataset is unavailable. This research first converted the handcrafted dataset, collected from impact factor journals, into n-gram combinations, ranging from unigram to pentagram, and also enhanced traditional graph-based approaches. The experiment was conducted on the handcrafted dataset, and all methods were applied to it. Domain experts performed a user study to evaluate the results. The results from every method were evaluated against the user study using precision, recall and f-measure as evaluation metrics. The results showed that the proposed method (FNG-IE) performed well and scored close to the machine learning approaches.
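The unigram-to-pentagram expansion step described above can be sketched in a few lines (illustrative only; the graph construction and ranking that follow in the paper are not shown).

```python
def ngrams(tokens, n_max=5):
    """All n-grams of the token stream, from unigram up to n_max-gram."""
    out = []
    for n in range(1, n_max + 1):
        out += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return out

grams = ngrams("keyword extraction from research articles".split())
```

For a 5-token input this yields 5 + 4 + 3 + 2 + 1 = 15 candidate keyword phrases.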
  8. Lee S, Abdullah A, Jhanjhi N, Kok S
    PeerJ Comput Sci, 2021;7:e350.
    PMID: 33817000 DOI: 10.7717/peerj-cs.350
    The Industrial Revolution 4.0 began with breakthrough technological advances in 5G, and artificial intelligence has innovatively transformed the manufacturing industry from digitalization and automation to the new era of smart factories. A smart factory can not only produce products in a digital and automatic system, but is also able to optimize production on its own by integrating production with process management, service distribution, and customized product requirements. A big challenge for the smart factory is to ensure that its network security can counteract cyber attacks such as botnet and Distributed Denial of Service attacks, which are recognized to cause serious interruptions in production and, consequently, economic losses for producers. Among many security solutions, botnet detection using honeypots has been shown to be effective in some investigative studies. It is a method of detecting botnet attackers by intentionally creating a resource within the network with the purpose of closely monitoring and acquiring botnet attacking behaviors. For the first time, a proposed model of botnet detection was experimented with by combining a honeypot with machine learning to classify botnet attacks. A mimicking smart factory environment was created on IoT device hardware configuration. Experimental results showed that the model gave a high accuracy of above 96%, with a very fast detection time of just 0.1 ms and a false positive rate of 0.24127, using the random forest algorithm with the Weka machine learning program. Hence, the honeypot combined with the machine learning model in this study was proved to be highly feasible to apply in the security network of a smart factory to detect botnet attacks.
  9. Al-Hadi IAA, Sharef NM, Sulaiman MN, Mustapha N, Nilashi M
    PeerJ Comput Sci, 2020;6:e331.
    PMID: 33816980 DOI: 10.7717/peerj-cs.331
    Recommendation systems suggest products of interest to customers based on their past ratings, preferences, and interests. These systems typically utilize collaborative filtering (CF) to analyze customers' ratings for products within the rating matrix. CF suffers from the sparsity problem because a large number of rating grades are not accurately determined. Various prediction approaches have been used to solve this problem by learning its latent and temporal factors. A few other challenges, such as latent feedback learning, customers' drifting interests, overfitting, and the popularity decay of products over time, have also been addressed. Existing works have typically deployed either short or long temporal representations for addressing recommendation system issues. Although each effort improves on the accuracy of its respective benchmark, an integrative solution that could address all the problems without trading off accuracy is needed. Thus, this paper presents a Latent-based Temporal Optimization (LTO) approach to improve the prediction accuracy of CF by learning the past attitudes of users and their interests over time. Experimental results show that the LTO approach efficiently improves the prediction accuracy of CF compared to the benchmark schemes.
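The general idea of combining latent factors with a temporal weighting can be sketched as below. This is an illustrative stand-in, not the LTO formulation: the exponential decay, hyperparameters and toy matrices are all assumptions, chosen only to show how recent ratings can be made to influence the learned factors more than stale ones.

```python
import numpy as np

def factorise(R, T, k=2, decay=0.1, lr=0.02, reg=0.05, epochs=500, seed=0):
    """SGD matrix factorisation with per-rating exponential time weights."""
    rng = np.random.default_rng(seed)
    n_u, n_i = R.shape
    P = 0.1 * rng.normal(size=(n_u, k))          # user latent factors
    Q = 0.1 * rng.normal(size=(n_i, k))          # item latent factors
    obs = np.argwhere(R > 0)                     # 0 marks a missing rating
    t_now = T.max()
    for _ in range(epochs):
        for u, i in obs:
            w = np.exp(-decay * (t_now - T[u, i]))   # temporal weight
            e = R[u, i] - P[u] @ Q[i]
            P_u = P[u].copy()
            P[u] += lr * (w * e * Q[i] - reg * P[u])
            Q[i] += lr * (w * e * P_u - reg * Q[i])
    return P, Q

R = np.array([[5.0, 3.0, 0.0],       # ratings (0 = unknown)
              [4.0, 0.0, 1.0],
              [0.0, 2.0, 5.0]])
T = np.array([[10, 2, 0],            # timestamps of the observed ratings
              [9, 0, 1],
              [0, 8, 10]])
P, Q = factorise(R, T)
pred = P @ Q.T                       # filled-in rating matrix
```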
  10. Ali AQ, Md Sultan AB, Abd Ghani AA, Zulzalil H
    PeerJ Comput Sci, 2020;6:e294.
    PMID: 33816945 DOI: 10.7717/peerj-cs.294
    Despite the benefits of standardization, the customization of Software as a Service (SaaS) application is also essential because of the many unique requirements of customers. This study, therefore, focuses on the development of a valid and reliable software customization model for SaaS quality that consists of (1) generic software customization types and a list of common practices for each customization type in the SaaS multi-tenant context, and (2) key quality attributes of SaaS applications associated with customization. The study was divided into three phases: the conceptualization of the model, analysis of its validity using SaaS academic-derived expertise, and evaluation of its reliability by submitting it to an internal consistency reliability test conducted by software-engineer researchers. The model was initially devised based on six customization approaches, 46 customization practices, and 13 quality attributes in the SaaS multi-tenant context. Subsequently, its content was validated over two rounds of testing after which one approach and 14 practices were removed and 20 practices were reformulated. The internal consistency reliability study was thereafter conducted by 34 software engineer researchers. All constructs of the content-validated model were found to be reliable in this study. The final version of the model consists of 6 constructs and 44 items. These six constructs and their associated items are as follows: (1) Configuration (eight items), (2) Composition (four items), (3) Extension (six items), (4) Integration (eight items), (5) Modification (five items), and (6) SaaS quality (13 items). The results of the study may contribute to enhancing the capability of empirically analyzing the impact of software customization on SaaS quality by benefiting from all resultant constructs and items.
  11. Rehman A, Hassan MF, Yew KH, Paputungan I, Tran DC
    PeerJ Comput Sci, 2020;6:e334.
    PMID: 33816982 DOI: 10.7717/peerj-cs.334
    In the near future, the Internet of Vehicles (IoV) is foreseen to become an inviolable part of smart cities. The integration of vehicular ad hoc networks (VANETs) into the IoV is being driven by the advent of the Internet of Things (IoT) and high-speed communication. However, both the technological and non-technical elements of IoV need to be standardized prior to deployment on the road. This study focuses on trust management (TM) in the IoV/VANETs/ITS (intelligent transport system). Trust has always been important in vehicular networks to ensure safety. A variety of techniques for TM and evaluation have been proposed over the years, yet few comprehensive studies that lay the foundation for the development of a "standard" for TM in IoV have been reported. The motivation behind this study is to examine all the TM models available for vehicular networks to bring together all the techniques from previous studies in this review. The study was carried out using a systematic method in which 31 papers out of 256 research publications were screened. An in-depth analysis of all the TM models was conducted and the strengths and weaknesses of each are highlighted. Considering that solutions based on AI are necessary to meet the requirements of a smart city, our second objective is to analyze the implications of incorporating an AI method based on "context awareness" in a vehicular network. It is evident from mobile ad hoc networks (MANETs) that there is potential for context awareness in ad hoc networks. The findings are expected to contribute significantly to the future formulation of IoV/ITS standards. In addition, gray areas and open questions for new research dimensions are highlighted.
  12. Rahman MA, Muniyandi RC, Albashish D, Rahman MM, Usman OL
    PeerJ Comput Sci, 2021;7:e344.
    PMID: 33816995 DOI: 10.7717/peerj-cs.344
    Artificial neural networks (ANN) perform well in real-world classification problems. In this paper, a robust classification model using ANN was constructed to enhance the accuracy of breast cancer classification. The Taguchi method was used to determine the suitable number of neurons in a single hidden layer of the ANN. The selection of a suitable number of neurons helps to solve the overfitting problem by affecting the classification performance of an ANN. With this, a robust classification model was then built for breast cancer classification. Based on the Taguchi method results, the suitable number of neurons selected for the hidden layer in this study is 15, which was used for the training of the proposed ANN model. The developed model was benchmarked upon the Wisconsin Diagnostic Breast Cancer Dataset, popularly known as the UCI dataset. Finally, the proposed model was compared with seven other existing classification models, and it was confirmed that the model in this study had the best accuracy at breast cancer classification, at 98.8%. This confirmed that the proposed model significantly improved performance.
  13. Yee PL, Mehmood S, Almogren A, Ali I, Anisi MH
    PeerJ Comput Sci, 2020;6:e326.
    PMID: 33816976 DOI: 10.7717/peerj-cs.326
    Opportunistic routing is an emerging routing technology that was proposed to overcome the drawback of unreliable transmission, especially in Wireless Sensor Networks (WSNs). Over the years, many forwarder selection methods have been proposed to improve performance in opportunistic routing. However, based on existing works, the findings have shown that there is still room for improvement in this domain, especially in the aspects of latency, network lifetime, and packet delivery ratio. In this work, a new relay node selection method was proposed. The proposed method uses the minimum or maximum range and optimum energy level to select the best relay node to forward packets, improving the performance of opportunistic routing. OMNeT++ and the MiXiM framework were used to simulate and evaluate the proposed method. The simulation settings were adopted based on the benchmark scheme. The evaluation results showed that our proposed method outperforms the benchmark scheme in terms of latency, network lifetime, and packet delivery ratio.
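A relay-selection rule of the kind described above can be sketched as follows. The field names, energy floor, and "closest to the sink among sufficiently charged neighbours" criterion are all assumptions made for illustration; the paper's actual range/energy rule may differ.

```python
def select_relay(neighbours, energy_floor=0.3):
    """Pick the candidate nearest the sink among nodes above an energy floor."""
    ok = [n for n in neighbours if n["energy"] >= energy_floor]
    if not ok:
        return None                      # no viable relay in range
    return min(ok, key=lambda n: n["dist_to_sink"])

neighbours = [
    {"id": "A", "dist_to_sink": 40.0, "energy": 0.9},
    {"id": "B", "dist_to_sink": 25.0, "energy": 0.2},   # closest, but depleted
    {"id": "C", "dist_to_sink": 30.0, "energy": 0.6},
]
relay = select_relay(neighbours)
```

Node B is excluded despite being closest, which is the trade-off between progress toward the sink and network lifetime that the abstract describes.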
  14. Agbolade O, Nazri A, Yaakob R, Ghani AAA, Cheah YK
    PeerJ Comput Sci, 2020;6:e249.
    PMID: 33816901 DOI: 10.7717/peerj-cs.249
    Over the years, neuroscientists and psychophysicists have been asking whether data acquisition for facial analysis should be performed holistically or with local feature analysis. This has led to various advanced methods of face recognition being proposed, and especially techniques using facial landmarks. The current facial landmark methods in 3D involve a mathematically complex and time-consuming workflow involving semi-landmark sliding tasks. This paper proposes a homologous multi-point warping for 3D facial landmarking, which is verified experimentally on each of the target objects in a given dataset using 500 landmarks (16 anatomical fixed points and 484 sliding semi-landmarks). This is achieved by building a template mesh as a reference object and applying this template to each of the targets in three datasets using an artificial deformation approach. The semi-landmarks are subjected to sliding along tangents to the curves or surfaces until the bending energy between a template and a target form is minimal. The results indicate that our method can be used to investigate shape variation for multiple datasets when implemented on three databases (Stirling, FRGC and Bosphorus).
  15. Lee YY, Abdul Halim Z
    PeerJ Comput Sci, 2020;6:e309.
    PMID: 33816960 DOI: 10.7717/peerj-cs.309
    Stochastic computing (SC) is an alternative to the ubiquitous deterministic computing domain, whereby a single logic gate can perform an arithmetic operation by exploiting the nature of probability math. SC was proposed in the 1960s, when binary computing was expensive. Presently, however, SC has regained interest following the widespread adoption of deep learning applications, specifically the convolutional neural network (CNN) algorithm, due to its practicality in hardware implementation. Although not all computing functions can be translated to the SC domain, several useful function blocks related to the CNN algorithm have been proposed and tested by researchers. An evolution of the CNN, namely the binarised neural network, has also gained attention in edge computing due to its compactness and computing efficiency. This study reviews various SC CNN hardware implementation methodologies. Firstly, we review the fundamental concepts of SC and the circuit structure, and then compare the advantages and disadvantages amongst different SC methods. Finally, we conclude the overview of SC in CNN and make suggestions for widespread implementation.
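The "single gate per arithmetic operation" idea can be demonstrated with the canonical SC example: values in [0, 1] are encoded as Bernoulli bitstreams, and a single AND gate multiplies them, since P(a AND b) = P(a) * P(b) for independent streams. The simulation below is a software sketch of that hardware behaviour.

```python
import numpy as np

def to_bitstream(p, n_bits, rng):
    """Encode a value p in [0, 1] as a random bitstream (1 with probability p)."""
    return rng.random(n_bits) < p

def from_bitstream(bits):
    """Decode: the fraction of 1s estimates the encoded value."""
    return bits.mean()

rng = np.random.default_rng(42)
N = 100_000                            # longer streams give lower variance
a = to_bitstream(0.8, N, rng)
b = to_bitstream(0.5, N, rng)
product = from_bitstream(a & b)        # one AND gate ~ multiplication, about 0.4
```

The estimate's standard error shrinks as 1/sqrt(N), which is exactly the accuracy/latency trade-off the SC literature discusses.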
  16. Al-Yousif S, Jaenul A, Al-Dayyeni W, Alamoodi A, Jabori I, Md Tahir N, et al.
    PeerJ Comput Sci, 2021;7:e452.
    PMID: 33987454 DOI: 10.7717/peerj-cs.452
    Context: The interpretations of cardiotocography (CTG) tracings are indeed vital to monitor fetal well-being both during pregnancy and childbirth. Currently, many studies are focusing on feature extraction and CTG classification using computer vision approaches to determine the most accurate diagnosis as well as to monitor fetal well-being during pregnancy. Additionally, a fetal monitoring system would be able to perform detection and precise quantification of fetal heart rate patterns.

    Objective: This study aimed to perform a systematic review describing the achievements made by researchers, summarizing the findings of previous research on feature extraction and CTG classification, determining the criteria and evaluation methods applied to the taxonomies of the proposed literature in the CTG field, and distinguishing aspects from relevant research in the field of CTG.

    Methods: Article search was done systematically using three databases: IEEE Xplore digital library, Science Direct, and Web of Science over a period of 5 years. The literature in the medical sciences and engineering was included in the search selection to provide a broader understanding for researchers.

    Results: After screening 372 articles, and based on our protocol of exclusion and inclusion criteria, for the final set of articles, 50 articles were obtained. The research literature taxonomy was divided into four stages. The first stage discussed the proposed method which presented steps and algorithms in the pre-processing stage, feature extraction and classification as well as their use in CTG (20/50 papers). The second stage included the development of a system specifically on automatic feature extraction and CTG classification (7/50 papers). The third stage consisted of reviews and survey articles on automatic feature extraction and CTG classification (3/50 papers). The last stage discussed evaluation and comparative studies to determine the best method for extracting and classifying features with comparisons based on a set of criteria (20/50 articles).

    Discussion: This study focused more on the literature than on techniques or methods. This study also identified the various types of datasets used in the surveyed work, spanning publicly available, private, and commercial datasets. To analyze the results, researchers evaluated independent datasets using different techniques.

    Conclusions: This systematic review contributes to the understanding of, and insight into, relevant research in the field of CTG by surveying and classifying pertinent research efforts. This review will help to address current research opportunities, problems and challenges, motivations, and recommendations related to feature extraction and CTG classification, as well as the measurement of performance on the various data sets used by other researchers.

  17. Zulqarnain M, Khalaf Zager Alsaedi A, Ghazali R, Ghouse MG, Sharif W, Aida Husaini N
    PeerJ Comput Sci, 2021;7:e570.
    PMID: 34435091 DOI: 10.7717/peerj-cs.570
    Question classification is one of the essential tasks for automatic question answering implementation in natural language processing (NLP). Recently, several text-mining issues such as text classification, document categorization, web mining, sentiment analysis, and spam filtering have been successfully addressed by deep learning approaches. In this study, we illustrate and investigate our work on deep learning approaches for question classification tasks in the highly inflected Turkish language. We trained and tested the deep learning architectures on a Turkish question dataset. We used three main deep learning approaches (Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN)) and also applied two deep learning combinations, the CNN-GRU and CNN-LSTM architectures. Furthermore, we applied the Word2vec technique with both skip-gram and CBOW methods for word embedding, with various vector sizes, on a large corpus composed of user questions. Through comparative analysis, we evaluated the deep learning architectures on both test accuracy and 10-fold cross-validation accuracy. The experimental results illustrate that the various Word2vec techniques have a considerable impact on the accuracy rate across the different deep learning approaches. We attained an accuracy of 93.7% using these techniques on the question dataset.
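The difference between the two Word2vec training regimes mentioned above comes down to how training examples are formed from the corpus; a sketch follows (illustrative only; the embedding training itself is not shown). Skip-gram predicts each context word from the centre word, while CBOW predicts the centre word from its whole context window.

```python
def skipgram_pairs(tokens, window=2):
    """(centre, context) pairs: one example per context word."""
    pairs = []
    for i, centre in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((centre, tokens[j]))
    return pairs

def cbow_pairs(tokens, window=2):
    """(context-tuple, centre) pairs: one example per position."""
    pairs = []
    for i, centre in enumerate(tokens):
        ctx = [tokens[j] for j in range(max(0, i - window),
                                        min(len(tokens), i + window + 1)) if j != i]
        pairs.append((tuple(ctx), centre))
    return pairs

sg = skipgram_pairs("how tall is mount everest".split())
cb = cbow_pairs("how tall is mount everest".split())
```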
  18. Ishaq K, Mat Zin NA, Rosdi F, Jehanghir M, Ishaq S, Abid A
    PeerJ Comput Sci, 2021;7:e496.
    PMID: 34084920 DOI: 10.7717/peerj-cs.496
    Learning a new language is a challenging task. In many countries, students are encouraged to learn an international language at school level. In particular, English is the most widely used international language and is being taught at the school level in many countries. The ubiquity and accessibility of smartphones combined with the recent developments in mobile application and gamification in teaching and training have paved the way for experimenting with language learning using mobile phones. This article presents a systematic literature review of the published research work in mobile-assisted language learning. To this end, more than 60 relevant primary studies which have been published in well-reputed venues have been selected for further analysis. The detailed analysis reveals that researchers developed many different simple and gamified mobile applications for learning languages based on various theories, frameworks, and advanced tools. Furthermore, the study also analyses how different applications have been evaluated and tested at different educational levels using different experimental settings while incorporating a variety of evaluation measures. Lastly, a taxonomy has been proposed for the research work in mobile-assisted language learning, which is followed by promising future research challenges in this domain.
  19. Ismail W, Al-Hadi IAA, Grosan C, Hendradi R
    PeerJ Comput Sci, 2021;7:e599.
    PMID: 34322590 DOI: 10.7717/peerj-cs.599
    Background: Virtual reality is utilised in exergames to help patients with disabilities improve the movement of their limbs. Exergame settings, such as the game difficulty, play important roles in the rehabilitation outcome. Conversely, suboptimal exergame settings may adversely affect the accuracy of the results obtained, such that the improvement in patients' movement performance falls below the desired expectations. In this paper, a recommender system is incorporated to suggest the most preferred movement setting for each patient, based on the movement history of the patient.

    Method: The proposed recommender system (ReComS) suggests the most suitable settings necessary to optimally improve patients' rehabilitation performance. In the course of developing the recommender system, three methods are proposed and compared: ReComS (k-nearest neighbours and collaborative filtering algorithms), ReComS+ (k-means, k-nearest neighbours, and collaborative filtering algorithms) and ReComS++ (bacterial foraging optimisation, k-means, k-nearest neighbours, and collaborative filtering algorithms). The experimental datasets are collected using the Medical Interactive Recovery Assistant (MIRA) software platform.

    Result: Experimental results, validated by the patients' exergame performances, reveal that the ReComS++ approach predicts the best exergame settings for patients with 85.76% accuracy.
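The k-NN + collaborative-filtering core shared by the ReComS variants can be sketched as follows. The data and names here are invented for illustration (not MIRA data): a setting's suitability for a patient is predicted from the most similar patients who have already tried it.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity over co-rated settings only (0 marks 'not tried')."""
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    a, b = u[mask], v[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict(perf, patient, setting, k=2):
    """Similarity-weighted average of the k nearest patients' scores."""
    sims = [(cosine(perf[patient], perf[o]), o)
            for o in range(perf.shape[0])
            if o != patient and perf[o, setting] > 0]
    sims.sort(reverse=True)
    top = sims[:k]
    if not top:
        return 0.0
    num = sum(s * perf[o, setting] for s, o in top)
    den = sum(s for s, o in top)
    return num / den if den else 0.0

# rows: patients, cols: candidate exergame settings, 0 = not yet tried
perf = np.array([[5.0, 4.0, 0.0],
                 [4.0, 5.0, 3.0],
                 [5.0, 4.0, 4.0]])
score = predict(perf, patient=0, setting=2)   # predicted fit of setting 2
```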

  20. Dube S, Wong YW, Nugroho H
    PeerJ Comput Sci, 2021;7:e633.
    PMID: 34322595 DOI: 10.7717/peerj-cs.633
    Incremental learning evolves deep neural network knowledge over time by learning continuously from new data, instead of training a model just once with all data present before the training starts. In incremental learning, new samples are always streaming in, and the model being trained needs to continuously adapt to them. Images are high dimensional data, and thus training deep neural networks on such data is very time-consuming. Fog computing is a paradigm that uses fog devices to carry out computation near data sources, reducing the computational load on the server. Fog computing democratises deep learning by enabling intelligence at the fog devices; however, one of the main challenges is the high communication cost between fog devices and the centralized servers, especially in incremental learning, where data samples are continuously arriving and need to be transmitted to the server for training. Working with Convolutional Neural Networks (CNN), we demonstrate a novel data sampling algorithm that discards certain training images per class before training even starts, which reduces the transmission cost from the fog device to the server and the model training time while maintaining model learning performance for both static and incremental learning. Results show that our proposed method can effectively perform data sampling regardless of the model architecture, dataset, and learning settings.
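The "discard certain images per class before training starts" step can be sketched as below. The selection criterion here is plain random subsampling for illustration; the paper's algorithm chooses which images to discard more carefully. The point is that the filtering happens on the fog device, so discarded samples are never transmitted to the server.

```python
import numpy as np

def sample_per_class(labels, keep_frac=0.6, seed=0):
    """Keep a fixed fraction of the sample indices of each class."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    keep = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        n_keep = max(1, int(round(keep_frac * len(idx))))
        keep.extend(rng.choice(idx, size=n_keep, replace=False))
    return np.sort(np.array(keep))

labels = [0] * 10 + [1] * 10 + [2] * 10   # toy class labels for 30 images
kept = sample_per_class(labels)           # indices to transmit for training
```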