  1. Abdullah MA, Ibrahim MAR, Shapiee MNA, Zakaria MA, Mohd Razman MA, Muazu Musa R, et al.
    PeerJ Comput Sci, 2021;7:e680.
    PMID: 34497873 DOI: 10.7717/peerj-cs.680
    This study aims to classify flat-ground tricks, namely Ollie, Kickflip, Shove-it, Nollie and Frontside 180, by identifying significant input image transformations on different transfer learning models with an optimized Support Vector Machine (SVM) classifier. A total of six amateur skateboarders (20 ± 7 years of age with at least 5.0 years of experience) repeatedly executed five tricks of each type on a customized ORY skateboard (with a fused IMU sensor) on cemented ground. From the IMU data, six raw signals were extracted. Two input image types, namely raw data (RAW) and Continuous Wavelet Transform (CWT), as well as six transfer learning models from three different families along with a grid-search-optimized SVM, were investigated for their efficacy in classifying the skateboarding tricks. The study showed that RAW and CWT input images on the MobileNet, MobileNetV2 and ResNet101 transfer learning models achieved 100% accuracy on the test dataset. Nonetheless, when the computational time of the best models was compared, the CWT-MobileNet-optimized SVM pipeline was found to be the best. It can be concluded that the proposed method can assist judges and coaches in identifying the execution of skateboarding tricks.
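    As a rough illustration of the pipeline above, the sketch below wires a CWT scalogram step into a pre-trained MobileNet feature extractor and a grid-searched SVM. It is a minimal sketch, not the authors' code: the wavelet choice (morl), scale range, image size, and SVM grid are all assumptions.

```python
# Hypothetical CWT -> MobileNet -> SVM pipeline; names and parameters are assumptions.
import numpy as np
import pywt
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.applications.mobilenet import preprocess_input
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def cwt_image(signal, size=224):
    """Turn one raw IMU channel into a scalogram sized for the CNN.
    Assumes len(signal) >= size."""
    coeffs, _ = pywt.cwt(signal, scales=np.arange(1, size + 1), wavelet="morl")
    img = np.abs(coeffs)[:, :size]
    img = (img - img.min()) / (img.ptp() + 1e-9)      # normalize to [0, 1]
    return np.stack([img] * 3, axis=-1) * 255.0       # replicate to 3 channels

backbone = MobileNet(weights="imagenet", include_top=False, pooling="avg")

def extract_features(signals):
    """signals: (n_samples, signal_length) windows of one IMU channel."""
    imgs = np.stack([cwt_image(s) for s in signals])
    return backbone.predict(preprocess_input(imgs), verbose=0)

# Grid-searched SVM on the deep features, as in the paper's pipeline.
svm = GridSearchCV(SVC(), {"C": [0.1, 1, 10, 100], "kernel": ["rbf", "linear"]}, cv=5)
# svm.fit(extract_features(X_train), y_train)
# print(svm.score(extract_features(X_test), y_test))
```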
  2. Al-Mashhadi S, Anbar M, Hasbullah I, Alamiedy TA
    PeerJ Comput Sci, 2021;7:e640.
    PMID: 34458571 DOI: 10.7717/peerj-cs.640
    Botnets can simultaneously control millions of Internet-connected devices to launch damaging cyber-attacks that pose significant threats to the Internet. In a botnet, bot-masters communicate with the command and control server using various communication protocols. One of the widely used communication protocols is the Domain Name System (DNS) service, an essential Internet service. Bot-masters utilise Domain Generation Algorithms (DGA) and fast-flux techniques to avoid static blacklists and reverse engineering while remaining flexible. However, a botnet's DNS communication generates anomalous DNS traffic throughout the botnet life cycle, and such anomalies are considered indicators of the presence of DNS-based botnets in a network. Although several approaches have been proposed to detect botnets based on DNS traffic analysis, the problem persists and remains challenging for several reasons, such as the failure to consider significant features and rules that contribute to the detection of DNS-based botnets. Therefore, this paper examines the abnormality of DNS traffic during the botnet lifecycle to extract significant enriched features. These features are further analysed using two machine learning algorithms, and the union of the two algorithms' outputs forms a novel hybrid rule-based detection model. Two benchmark datasets are used to evaluate the performance of the proposed approach in terms of detection accuracy and false-positive rate. The experimental results show that the proposed approach achieves 99.96% accuracy and a 1.6% false-positive rate, outperforming other state-of-the-art DNS-based botnet detection approaches.
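    The hybrid idea of taking the union of two learners' outputs can be sketched as below. The feature set (query entropy, length, TTL, answer count) and the choice of decision tree plus random forest are illustrative assumptions, not the paper's exact features or algorithms.

```python
# Illustrative "union of two learners" DNS-botnet detector; features are assumed.
import math
from collections import Counter
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

def entropy(domain: str) -> float:
    """Shannon entropy of the domain string; DGA names tend to score high."""
    counts = Counter(domain)
    return -sum(c / len(domain) * math.log2(c / len(domain)) for c in counts.values())

def dns_features(queries):
    """queries: list of (domain, ttl, answer_count) tuples -> feature matrix."""
    return np.array([[entropy(d), len(d), ttl, n_ans] for d, ttl, n_ans in queries])

tree = DecisionTreeClassifier(max_depth=5)        # shallow tree: human-readable rules
forest = RandomForestClassifier(n_estimators=100)

def fit(X, y):
    tree.fit(X, y)
    forest.fit(X, y)

def predict_hybrid(X):
    # Union rule: traffic is flagged as botnet if EITHER model flags it.
    return np.logical_or(tree.predict(X) == 1, forest.predict(X) == 1).astype(int)
```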
  3. Ahmed Al-Hussein W, Mat Kiah ML, Lip Yee P, Zaidan BB
    PeerJ Comput Sci, 2021;7:e632.
    PMID: 34541305 DOI: 10.7717/peerj-cs.632
    In the planning and development of Intelligent Transportation Systems (ITS), understanding driver behaviour is considered highly valuable. Increasing rates of traffic accidents are attributed to reckless driving, incompetent preventive measures, and reliance on slow and inadequate assistance systems. This survey reviews and scrutinizes the literature on the sensor-based driver behaviour domain and answers questions not covered so far by existing reviews. It covers the factors required to improve the understanding of the various relevant characteristics of this domain and outlines the common incentives, open confrontations, and imminent recommendations from former researchers. The literature from January 2014 to December 2020 was systematically scanned, mainly across four databases (IEEE Xplore, ScienceDirect, Scopus and Web of Science), to locate highly credible peer-reviewed articles. Amongst the 5,962 articles found, 83 articles were selected based on the authors' predefined inclusion and exclusion criteria. A taxonomy of the existing literature is then presented to recognize the various aspects of this research area. Common issues, motivations, and recommendations of previous studies are identified and discussed. Moreover, substantial analysis is performed to identify gaps and weaknesses in the current literature and to guide future researchers in planning their experiments appropriately. Finally, future directions are provided for researchers interested in driver profiling and recognition. This survey is expected to emphasize existing research prospects and create further research directions in the near future.
  4. Ali O, Ishak MK, Bhatti MKL
    PeerJ Comput Sci, 2021;7:e659.
    PMID: 34541307 DOI: 10.7717/peerj-cs.659
    Over the last decade, the Internet of Things (IoT) domain has grown dramatically, from ultra-low-power hardware design to cloud-based solutions, and now, with the rise of 5G technology, a new horizon for edge computing on IoT devices will be introduced. A wide range of communication technologies has steadily evolved in recent years, representing diverse domain areas and communication specifications. Because of the heterogeneity of technologies and interconnectivity, the true realisation of the IoT ecosystem is currently hampered by multiple dynamic integration challenges. In this context, several emerging IoT domains necessitate complete re-modeling, design, and standardisation from the ground up in order to achieve seamless IoT ecosystem integration. This paper investigates the Internet of Nano-Things (IoNT), Internet of Space-Things (IoST), Internet of Underwater-Things (IoUT) and Social Internet of Things (SIoT), which have broad future scope through their integration with, and ability to support, other IoT domains, by highlighting their application domains, state-of-the-art research, and open challenges. To the best of our knowledge, there is little or no information on the current state of these ecosystems, which is the motivating factor behind this article. Finally, the paper summarises the integration of these ecosystems with current IoT domains and suggests future directions for overcoming the challenges.
  5. Sadiq RB, Safie N, Abd Rahman AH, Goudarzi S
    PeerJ Comput Sci, 2021;7:e661.
    PMID: 34541308 DOI: 10.7717/peerj-cs.661
    Organizations in various industries have widely adopted the artificial intelligence (AI) maturity model as a systematic approach to assessing and improving their AI capability. This study systematically reviews state-of-the-art studies related to AI maturity models. It allows a deeper understanding of the methodological issues relevant to maturity models, especially in terms of the objectives, the methods employed to develop and validate the models, and the scope and characteristics of maturity model development. Our analysis reveals that most works concentrate on developing maturity models with or without empirical validation, and that the largest proportion of models were designed for specific domains and purposes. Maturity model development typically uses a bottom-up design approach, and most of the models have a descriptive character. Besides that, maturity grids and continuous representations with five levels are currently trending in maturity model development. Six out of 13 studies (46%) on AI maturity pertain to assessing the technology aspect, even in specific domains. This confirms that organizations still need to improve their AI capability and strengthen their AI maturity. This review contributes to the evolution of organizations using AI by explaining the concepts, approaches, and elements of maturity models.
  6. Goh JY, Khang TF
    PeerJ Comput Sci, 2021;7:e698.
    PMID: 34604523 DOI: 10.7717/peerj-cs.698
    In image analysis, orthogonal moments are useful mathematical transformations for creating new features from digital images. Moreover, orthogonal moment invariants produce image features that are resistant to translation, rotation, and scaling operations. Here, we present a case study in biological image analysis to help researchers judge the potential efficacy of image features derived from orthogonal moments in a machine learning context. In the taxonomic classification of forensically important flies from the Sarcophagidae and Calliphoridae families (n = 74), we found that the GUIDE random forests model was able to classify samples from 15 different species completely correctly based on Krawtchouk moment invariant features generated from fly wing images, with zero out-of-bag error probability. For the more challenging problem of classifying breast masses based solely on digital mammograms from the CBIS-DDSM database (n = 1,151), we found that image features generated from generalized pseudo-Zernike moments and Krawtchouk moments enabled the GUIDE kernel model to achieve only modest classification performance. However, using the predicted probability of malignancy from GUIDE as a feature together with five expert features resulted in a reasonably good model with a mean sensitivity of 85%, a mean specificity of 61%, and a mean accuracy of 70%. We conclude that orthogonal moments have high potential as informative image features in taxonomic classification problems where the patterns of biological variation are not overly complex. For more complicated and heterogeneous patterns of biological variation, such as those present in medical images, relying on orthogonal moments alone to reach strong classification performance is unrealistic, but integrating predictions based on them with carefully selected expert features may still produce reasonably good prediction models.
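    Krawtchouk moment invariants are not available in mainstream Python libraries, and GUIDE is a standalone program, so the following sketch substitutes Zernike moments (via mahotas) and scikit-learn's random forest to convey the moments-as-features workflow under those stated assumptions.

```python
# Moment-features-plus-forest sketch: Zernike moments stand in for Krawtchouk,
# and sklearn's RandomForestClassifier stands in for GUIDE.
import mahotas
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def moment_features(binary_wing_image, radius=64, degree=8):
    """Rotation-invariant orthogonal moment magnitudes from a segmented 2D image."""
    return mahotas.features.zernike_moments(binary_wing_image, radius, degree=degree)

# X: list of segmented wing images, y: species labels
# feats = np.array([moment_features(img) for img in X])
# clf = RandomForestClassifier(n_estimators=500, oob_score=True).fit(feats, y)
# print("OOB error:", 1 - clf.oob_score_)   # paper reports zero OOB error with GUIDE
```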
  7. Islam MN, Sulaiman N, Farid FA, Uddin J, Alyami SA, Rashid M, et al.
    PeerJ Comput Sci, 2021;7:e638.
    PMID: 34712786 DOI: 10.7717/peerj-cs.638
    Hearing deficiency is the world's most common sensory impairment and impedes human communication and learning. Early and precise hearing diagnosis using electroencephalography (EEG) is regarded as the optimal strategy for dealing with this issue. Among a wide range of EEG control signals, the most relevant modality for hearing loss diagnosis is the auditory evoked potential (AEP), which is produced in the brain's cortex through an auditory stimulus. This study aims to develop a robust intelligent auditory sensation system utilizing a pre-trained deep learning framework by analyzing and evaluating the functional reliability of hearing based on the AEP response. First, the raw AEP data is transformed into time-frequency images through wavelet transformation. Then, lower-level functionality is extracted using a pre-trained network. Here, an improved-VGG16 architecture has been designed by removing some convolutional layers and adding new layers to the fully connected block. Subsequently, the higher levels of the neural network architecture are fine-tuned using the labelled time-frequency images. Finally, the proposed method's performance has been validated on a reputable, publicly available AEP dataset recorded from sixteen subjects while they heard specific auditory stimuli in the left or right ear. The proposed method outperforms state-of-the-art studies by improving the classification accuracy to 96.87% (from 57.375%), which indicates that the proposed improved-VGG16 architecture can effectively handle the AEP response in early hearing loss diagnosis.
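    A minimal sketch of the fine-tuning strategy described above: freeze a pre-trained VGG16 backbone, drop its original top, and attach a new fully connected block. The layer sizes and the two-class output are assumptions; the paper's exact removed and added layers are not reproduced.

```python
# Transfer-learning sketch: frozen VGG16 convolutional base + new FC block.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False                       # reuse low-level filters as-is

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(256, activation="relu")(x)       # new FC block (assumed sizes)
x = layers.Dropout(0.5)(x)
out = layers.Dense(2, activation="softmax")(x)    # e.g. left-ear vs right-ear stimulus

model = models.Model(base.input, out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(tf_images_train, labels_train, validation_data=(tf_images_val, labels_val))
```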
  8. Arshad A, Mohd Hanapi Z, Subramaniam S, Latip R
    PeerJ Comput Sci, 2021;7:e673.
    PMID: 34712787 DOI: 10.7717/peerj-cs.673
    Wireless sensor networks (WSN) have been among the most prevalent wireless innovations over the years, enabling exciting new Internet of Things (IoT) applications. IoT-based WSNs integrated with the Internet Protocol (IP) allow any physical object with sensors to be connected ubiquitously and to send real-time data to a server connected to the Internet gateway. Security in WSNs remains an ongoing research trend that falls under the IoT paradigm. A WSN node deployed in a hostile environment is susceptible to security attacks such as the Sybil attack due to its distributed architecture and the network contention implemented in the routing protocol. In a Sybil attack, an adversary illegally advertises several false identities, or a single identity that may occur at several locations, called Sybil nodes. Therefore, in this paper, we survey the most up-to-date methods for defending against the Sybil attack. The countermeasures include encryption, trust, received signal strength indicator (RSSI), and artificial intelligence. Specifically, we survey the different methods, along with their advantages and disadvantages, for mitigating the Sybil attack. We discuss the lessons learned, future avenues of study, and open issues in WSN security analysis.
  9. Alansari Z, Anuar NB, Kamsin A, Belgaum MR
    PeerJ Comput Sci, 2022;8:e1135.
    PMID: 36426265 DOI: 10.7717/peerj-cs.1135
    Wireless sensor networks (WSNs) consist of hundreds or thousands of sensor nodes distributed over a wide area and used as Internet of Things (IoT) devices to benefit many home users and autonomous-systems industries. With many users adopting WSN-based IoT technology, it is essential to ensure that the sensors' information is protected from attacks. Many attacks interrupt WSNs, such as Quality of Service (QoS) attacks, malicious nodes, and routing attacks. To combat these attacks, especially routing attacks, attacker nodes must be detected and prevented from accessing the WSN. Although some survey studies on routing attacks have been published, the literature lacks systematic studies on detecting WSN routing attacks. This study enhances the topic with a taxonomy of current and emerging detection techniques for routing attacks in wireless sensor networks to improve QoS. This article uses a PRISMA flow diagram for a systematic review of 87 articles from 2016 to 2022 covering eight routing attacks: wormhole, Sybil, grayhole/selective forwarding, blackhole, sinkhole, replay, spoofing, and hello flood attacks. The review also evaluates the metrics and criteria used to assess performance. Researchers can use this article to fill in any information gaps within the WSN routing attack detection domain.
  10. Jusoh R, Firdaus A, Anwar S, Osman MZ, Darmawan MF, Ab Razak MF
    PeerJ Comput Sci, 2021;7:e522.
    PMID: 34825052 DOI: 10.7717/peerj-cs.522
    Android is a free, open-source operating system (OS), which allows an in-depth understanding of its architecture. Therefore, many manufacturers utilize this OS to produce mobile devices (smartphones, smartwatches, and smart glasses) across different brands, including Google Pixel, Motorola, Samsung, and Sony. Notably, the adoption of this OS has led to a rapid increase in the number of Android users. However, unethical authors develop malware for these devices for wealth, fame, or private purposes. Although practitioners conduct intrusion detection analyses, such as static analysis, there is an inadequate number of review articles discussing the research efforts on this type of analysis. Therefore, this study discusses the articles published from 2009 to 2019 and analyses the steps in static analysis (reverse engineering, feature extraction, and classification) with a taxonomy. Following that, research issues in static analysis are also highlighted. Overall, this study serves as guidance for novice security practitioners and expert researchers in proposing novel research to detect malware through static analysis.
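    To make the reviewed static-analysis steps concrete, the hypothetical sketch below reverse-engineers an APK with androguard, extracts requested permissions as features, and feeds them to a simple classifier. The feature choice and classifier are illustrative, not drawn from any specific surveyed work.

```python
# Static-analysis sketch: reverse engineering -> feature extraction -> classification.
from androguard.misc import AnalyzeAPK
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import BernoulliNB

def permission_features(apk_path):
    """Decompile the APK and return its requested permissions as a binary dict."""
    a, d, dx = AnalyzeAPK(apk_path)             # reverse-engineering step
    return {perm: 1 for perm in a.get_permissions()}

# feats = [permission_features(p) for p in apk_paths]
# X = DictVectorizer(sparse=False).fit_transform(feats)   # feature step
# clf = BernoulliNB().fit(X, labels)                      # classification step
```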
  11. Farid M, Latip R, Hussin M, Abdul Hamid NAW
    PeerJ Comput Sci, 2021;7:e747.
    PMID: 34805503 DOI: 10.7717/peerj-cs.747
    Background: Recent technological developments have enabled the execution of more scientific solutions on cloud platforms. Cloud-based scientific workflows are subject to various risks, such as security breaches and unauthorized access to resources. By attacking side channels or virtual machines, attackers may bring down servers, causing interruption and delay or incorrect output. Because cloud-based scientific workflows are often used for vital computation-intensive tasks, their failure can come at a great cost.

    Methodology: To increase workflow reliability, we propose the Fault- and Intrusion-Tolerant Workflow Scheduling algorithm (FITSW). The proposed workflow system uses task executors consisting of many virtual machines to carry out workflow tasks. FITSW duplicates each sub-task three times, uses an intermediate-data decision-making mechanism, and then employs a deadline partitioning method to determine sub-deadlines for each sub-task. In this way, dynamic task scheduling is achieved using the resource flow (see the sketch after the results below). The proposed technique generates or recycles task executors, keeps the workflow clean, and improves efficiency. Experiments were conducted on WorkflowSim to evaluate the effectiveness of FITSW using metrics such as task completion rate, success rate, and completion time.

    Results: The results show that FITSW not only raises the success rate by about 12%, but also improves the task completion rate by 6.2% and reduces the completion time by about 15.6% in comparison with the intrusion-tolerant scientific workflow (ITSW) system.
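    A minimal sketch of the two scheduling ingredients named in the methodology, under assumed names and data shapes: proportional deadline partitioning, and triplicated execution with a majority vote on intermediate data.

```python
# Illustrative FITSW ingredients; the real algorithm's details are not reproduced.
from collections import Counter

def partition_deadline(task_estimates, workflow_deadline):
    """Split the workflow deadline into per-sub-task sub-deadlines,
    proportional to each sub-task's estimated runtime."""
    total = sum(task_estimates)
    return [workflow_deadline * est / total for est in task_estimates]

def run_triplicated(subtask, executors):
    """Run a sub-task on three executors and keep the majority result."""
    results = [ex(subtask) for ex in executors[:3]]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:                      # no agreement: treat as fault/intrusion
        raise RuntimeError("intermediate-data decision failed for %r" % (subtask,))
    return value

# sub_deadlines = partition_deadline([3.0, 5.0, 2.0], workflow_deadline=20.0)
```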

  12. Lakhan A, Abed Mohammed M, Kadry S, Hameed Abdulkareem K, Taha Al-Dhief F, Hsu CH
    PeerJ Comput Sci, 2021;7:e758.
    PMID: 34901423 DOI: 10.7717/peerj-cs.758
    The intelligent reflecting surface (IRS) is a ground-breaking technology that can boost the efficiency of wireless data transmission systems. Specifically, the wireless signal transmission environment is reconfigured by simultaneously adjusting a large number of small reflecting units. Therefore, the IRS has been suggested as a possible solution for improving several aspects of future wireless communication. However, while individual nodes are empowered in the IRS, decisions and data learning are still made by a centralized node in the IRS mechanism. Moreover, previous works have largely overlooked the problem of energy-efficient and delay-aware learning in IRS-assisted communications. This paper proposes the federated-learning-aware Intelligent Reconfigurable Surface Task Scheduling (FL-IRSTS) algorithm to achieve high-speed communication with energy- and delay-efficient offloading and scheduling. Model training is divided among different nodes, and the trained model decides the IRSTS configuration that best meets the goals in terms of communication rate. Multiple local models are trained with the local healthcare fog-cloud network for each workload using federated learning (FL) to generate a global model, and each trained model then shares its initial configuration with the global model for the next training round. Each application's healthcare data is handled and processed locally during the training process. Simulation results show that the proposed algorithm's achievable rate can effectively approach that of centralized machine learning (ML) while meeting the study's energy and delay objectives.
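    The FL step can be illustrated with a bare-bones federated-averaging round: each node trains on its local data, and only model weights are averaged centrally. The linear model and update rule below are placeholders, not the paper's architecture.

```python
# Minimal FedAvg sketch: local training + central weight averaging.
import numpy as np

def local_train(global_w, X, y, lr=0.01, epochs=5):
    """Plain linear-model SGD on one node's local (private) healthcare data."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, node_datasets):
    """One FL round: local updates, then weight averaging (FedAvg)."""
    local_ws = [local_train(global_w, X, y) for X, y in node_datasets]
    return np.mean(local_ws, axis=0)

# w = np.zeros(n_features)
# for _ in range(rounds):
#     w = federated_round(w, node_datasets)   # raw data never leaves the nodes
```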
  13. Sameer Sadeq A, Hassan R, Hafizah Mohd Aman A, Sallehudin H, Allehaibi K, Albogami N, et al.
    PeerJ Comput Sci, 2021;7:e733.
    PMID: 34901420 DOI: 10.7717/peerj-cs.733
    The development of Medium Access Control (MAC) protocols for the Internet of Things should consider various aspects such as energy saving, scalability to a large number of nodes, and grouping awareness. Although numerous protocols consider these aspects within the limited view of handling medium access, the proposed Grouping MAC (GMAC) exploits prior knowledge of the geographic distribution of nodes in the environment and their priority levels. Such awareness enables GMAC to significantly reduce the number of collisions and prolong the network lifetime. GMAC is built on five cycles that manage data transmission between sensors and the cluster head and between the cluster head and the sink. These two stages of communication increase the energy efficiency of packet transmission. In addition, GMAC performs slot decomposition and assignment based on node priority and is therefore a grouping-aware protocol. Compared with the standard benchmarks IEEE 802.15.4 and the industrial automation standard ISA100.11a, as well as user-defined grouping, GMAC achieves a Packet Delivery Ratio (PDR) higher than 90%, whereas the benchmarks' PDR drops to 75% in some scenarios and 30% in others. In addition, GMAC achieves an end-to-end (e2e) delay 3 s lower than the lowest e2e delay of IEEE 802.15.4. Regarding energy consumption, GMAC-IEEE Energy Aware (EA) and GMAC-IEEE consume 28.1 W/h, which is less than IEEE 802.15.4 (578 W/h) in certain scenarios.
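    The priority-based slot decomposition can be pictured with the toy assignment below, where higher-priority groups are scheduled first and receive proportionally more slots per frame. The weighting rule is an assumption for illustration, not GMAC's actual schedule.

```python
# Toy priority-aware TDMA slot assignment sketch.
def assign_slots(groups, frame_slots):
    """groups: list of (group_id, priority, n_nodes); higher priority served first."""
    ordered = sorted(groups, key=lambda g: -g[1])
    weight = sum(p * n for _, p, n in ordered)
    schedule, cursor = {}, 0
    for gid, prio, n_nodes in ordered:
        # Each group gets at least one slot per node, more if its priority warrants.
        share = max(n_nodes, round(frame_slots * prio * n_nodes / weight))
        schedule[gid] = list(range(cursor, cursor + share))
        cursor += share
    return schedule

# e.g. assign_slots([("alarm", 3, 4), ("bulk", 1, 10)], frame_slots=32)
```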
  14. Khalid H, Hashim SJ, Ahmad SMS, Hashim F, Akmal Chaudhary M
    PeerJ Comput Sci, 2021;7:e714.
    PMID: 34977343 DOI: 10.7717/peerj-cs.714
    In heterogeneous wireless networks, the industrial Internet of Things (IIoT) is an essential contributor to increasing productivity and effectiveness. However, in various domains, such as industrial wireless scenarios, small cell domains, and vehicular ad hoc networks (VANETs), an efficient and stable authentication algorithm is required. Specifically, IoT vehicles deal with vast amounts of data transmitted between VANET entities in different domains in such a large-scale environment. Also, crossing from one domain to another may bring connectivity services down for a while, leading to service interruption, which is common in remote areas and places with multipath obstructions. Hence, the system is vulnerable to specific attacks (e.g., replay attacks, modification attacks, man-in-the-middle attacks, and insider attacks), making it inefficient. Additionally, heavy data processing increases computation and communication costs, leading to an increased workload in the system. Thus, to solve the above issues, we propose an online/offline lightweight authentication scheme for the VANET cross-domain system in IIoT to improve the security and efficiency of the VANET. The proposed scheme utilizes an efficient AES-RSA algorithm to achieve the integrity and confidentiality of messages. Offline joining is added to avoid remote network intrusions and the risk of network service interruptions. The proposed work pursues two significant goals: securing the messages over which data is transmitted, and cryptographic efficiency. Burrows-Abadi-Needham (BAN) logic is used to prove that the scheme is mutually authenticated, and the system's security has been formally evaluated and verified using the well-known AVISPA tool. The results show that the proposed scheme outperforms the ID-CPPA, AAAS, and HCDA schemes by 53%, 55%, and 47% respectively in terms of computation cost, and by 65%, 83%, and 40% respectively in terms of communication cost.
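    The AES-RSA combination the scheme builds on is, in essence, textbook hybrid encryption: RSA wraps a fresh AES key, and AES-GCM protects each message's confidentiality and integrity. The sketch below shows that generic pattern with the `cryptography` library, not the paper's exact protocol.

```python
# Generic AES-RSA hybrid encryption sketch (not the paper's protocol).
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def hybrid_encrypt(pub, message: bytes):
    aes_key, nonce = AESGCM.generate_key(bit_length=128), os.urandom(12)
    ct = AESGCM(aes_key).encrypt(nonce, message, None)   # confidentiality + integrity
    wrapped = pub.encrypt(aes_key, padding.OAEP(          # RSA wraps the AES key
        mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None))
    return wrapped, nonce, ct

def hybrid_decrypt(priv, wrapped, nonce, ct):
    aes_key = priv.decrypt(wrapped, padding.OAEP(
        mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None))
    return AESGCM(aes_key).decrypt(nonce, ct, None)

# w, n, c = hybrid_encrypt(receiver_key.public_key(), b"V2X beacon payload")
# assert hybrid_decrypt(receiver_key, w, n, c) == b"V2X beacon payload"
```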
  15. Mad Yusoh SS, Abd Wahab D, Adil Habeeb H, Azman AH
    PeerJ Comput Sci, 2021;7:e808.
    PMID: 34977355 DOI: 10.7717/peerj-cs.808
    Conventional component repair in remanufacturing involves human decision making influenced by several factors, such as the condition of incoming cores, modes of failure, severity of damage, the features and geometric complexities of cores, and the types of repair required. Repair can be enhanced through automation using additive manufacturing (AM) technology. Advancements in AM have led to the development of directed energy deposition and laser cladding technology for the repair of damaged parts and components. The objective of this systematic literature review is to ascertain how intelligent systems can be integrated into AM-based repair through artificial intelligence (AI) approaches capable of supporting the nature and process of decision making during repair. The integration of intelligent systems into AM repair is expected to enhance resource utilization and repair efficiency during remanufacturing. Based on a systematic literature review of articles published during 2005-2021, the study analyses the activities of conventional repair in remanufacturing, trends in the application of AM for repair using current state-of-the-art technology, and how AI has been deployed to facilitate repair. The study concludes with suggestions on research areas and opportunities that will further enhance the automation of component repair during remanufacturing using intelligent AM systems.
  16. Dashti M, Javdani Gandomani T, Hasanpoor Adeh D, Zulzalil H, Md Sultan AB
    PeerJ Comput Sci, 2022;8:e800.
    PMID: 35111910 DOI: 10.7717/peerj-cs.800
    One of the most important and critical factors in software projects is proper cost estimation. This activity, which must be carried out in the initial stage prior to the beginning of a project, always encounters several challenges and problems. Because of the high significance and impact of proper cost estimation, several approaches and methods have been proposed for performing it, among which the analogy-based approach is one of the most popular. In recent years, many attempts have been made to employ suitable techniques and methods in this approach to improve estimation accuracy, yet achieving better accuracy remains an open research topic. To improve software development cost estimation, the current study investigates the effect of the LEM algorithm on the optimization of feature weighting and proposes a new method. The effectiveness of this algorithm is examined on two datasets, Desharnais and Maxwell, and the MMRE, PRED(0.25), and MdMRE criteria are used to evaluate and compare the proposed method against other evolutionary algorithms. The proposed method showed considerable improvement in software cost estimation accuracy.
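    For readers unfamiliar with the evaluation criteria, the sketch below shows weighted analogy-based estimation together with MMRE, MdMRE, and PRED(0.25). The distance metric and k are assumptions; the feature weights are the quantities an evolutionary algorithm such as LEM would optimize.

```python
# Weighted analogy-based effort estimation plus the named evaluation criteria.
import numpy as np

def analogy_estimate(project, history_X, history_effort, weights, k=3):
    """Estimate effort as the mean of the k most similar historical projects,
    using a feature-weighted Euclidean distance."""
    d = np.sqrt(((history_X - project) ** 2 * weights).sum(axis=1))
    return history_effort[np.argsort(d)[:k]].mean()

def evaluate(actual, estimated):
    actual, estimated = np.asarray(actual), np.asarray(estimated)
    errors = np.abs(actual - estimated) / actual        # magnitude of relative error
    return {"MMRE": errors.mean(),                      # mean MRE
            "MdMRE": np.median(errors),                 # median MRE
            "PRED(0.25)": (errors <= 0.25).mean()}      # share of estimates within 25%
```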
  17. Aboaoja FA, Zainal A, Ghaleb FA, Alghamdi NS, Saeed F, Alhuwayji H
    PeerJ Comput Sci, 2023;9:e1492.
    PMID: 37810364 DOI: 10.7717/peerj-cs.1492
    BACKGROUND: Malware (malicious software) is the major security concern of the digital realm. Conventional cyber-security solutions are challenged by sophisticated malicious behaviors. Currently, the overlap between malicious and legitimate behaviors makes it more difficult to characterize those behaviors as malicious or legitimate. For instance, evasive malware often mimics legitimate behaviors, and evasion techniques are utilized by both legitimate and malicious software.

    PROBLEM: Most existing solutions use the traditional term frequency-inverse document frequency (TF-IDF) technique or its concept to represent malware behaviors. However, traditional TF-IDF and the techniques developed from it represent the features, especially the shared ones, inaccurately because they calculate a weight for each feature without considering its distribution in each class; instead, the weight is based on the feature's distribution across all the documents. Such a presumption can dilute the meaning of those features, and when those features are used to classify malware, they lead to a high false alarm rate.

    METHOD: This study proposes a Kullback-Leibler Divergence-based Term Frequency-Probability Class Distribution (KLD-based TF-PCD) algorithm to represent the extracted features based on the differences between the probability distributions of the terms in the malware and benign classes. Unlike existing solutions, the proposed algorithm increases the weights of the important features by using the Kullback-Leibler divergence to measure the differences between their probability distributions in the malware and benign classes (a sketch of this weighting appears after the conclusion below).

    RESULTS: The experimental results show that the proposed KLD-based TF-PCD algorithm achieved an accuracy of 0.972, a false positive rate of 0.037, and an F-measure of 0.978. These results are significant compared with those of related studies. Thus, the proposed KLD-based TF-PCD algorithm contributes to improving the security of cyberspace.

    CONCLUSION: The proposed algorithm adds new meaningful characteristics that enrich the classifiers' learned knowledge and thus increase their ability to classify malicious behaviors accurately.
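    A hedged sketch of the weighting idea: scale each term's frequency by the divergence between its smoothed probability distributions in the malware and benign classes, so shared terms receive small weights. The smoothing and normalization details are assumptions, not the paper's exact formulation.

```python
# KLD-weighted term representation sketch (assumed details, not the paper's code).
import numpy as np

def class_term_probs(term_counts, alpha=1.0):
    """term_counts: (n_terms,) counts in one class -> smoothed probabilities."""
    counts = np.asarray(term_counts, dtype=float) + alpha
    return counts / counts.sum()

def kld_tf_pcd(doc_tf, malware_counts, benign_counts):
    """Weight a document's term frequencies by per-term symmetric KL contributions."""
    p = class_term_probs(malware_counts)
    q = class_term_probs(benign_counts)
    kld = p * np.log(p / q) + q * np.log(q / p)   # per-term, both directions
    return np.asarray(doc_tf) * kld               # shared terms (p ~ q) get tiny weight

# X_train = np.array([kld_tf_pcd(tf, mal_counts, ben_counts) for tf in doc_tfs])
```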

  18. Alkhazi B, Alipour A
    PeerJ Comput Sci, 2023;9:e1587.
    PMID: 37869450 DOI: 10.7717/peerj-cs.1587
    The ability to create decentralized applications without the authority of a single entity has attracted numerous developers to build applications using blockchain technology. However, ensuring the correctness of such applications poses significant challenges, as failures can result in financial losses or, even worse, a loss of user trust. Testing smart contracts introduces a unique set of challenges due to the additional restrictions and costs imposed by blockchain platforms during test case execution. Therefore, it remains uncertain whether testing techniques developed for traditional software can be effectively adapted to smart contracts. In this study, we propose a multi-objective test selection technique for smart contracts that aims to balance three objectives: time, coverage, and gas usage. We evaluated our approach on a comprehensive selection of real-world smart contracts and compared the results with various test selection methods employed in traditional software systems. Statistical analysis of our experiments, which utilized benchmark Solidity smart contract case studies, demonstrates that our approach significantly reduces the testing cost while maintaining acceptable fault detection capabilities, in comparison to random search, mono-objective search, and the traditional re-testing method that does not employ heuristic search.
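    The three-objective trade-off can be sketched with a plain Pareto-dominance filter over candidate test subsets, as below. This only illustrates the objective space (minimize time and gas, maximize coverage); the paper's actual search heuristic is not reproduced.

```python
# Toy Pareto filter over test subsets for the (time, gas, coverage) trade-off.
def score(subset, tests):
    """tests: {name: {"time": float, "gas": int, "branches": set}}."""
    time = sum(tests[t]["time"] for t in subset)
    gas = sum(tests[t]["gas"] for t in subset)
    covered = set().union(*(tests[t]["branches"] for t in subset)) if subset else set()
    return (time, gas, -len(covered))   # minimize time and gas, maximize coverage

def dominates(a, b):
    """a Pareto-dominates b: no worse on all objectives, strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates, tests):
    scored = [(score(s, tests), s) for s in candidates]
    return [s for sc, s in scored
            if not any(dominates(other, sc) for other, _ in scored)]

# front = pareto_front(candidate_subsets, tests)
```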
  19. Zhang XY, Abd Rahman AH, Qamar F
    PeerJ Comput Sci, 2023;9:e1628.
    PMID: 37869467 DOI: 10.7717/peerj-cs.1628
    Simultaneous localization and mapping (SLAM) is a fundamental problem in robotics and computer vision. It involves a robot or an autonomous system navigating an unknown environment while simultaneously creating a map of the surroundings and accurately estimating its position within that map. While significant progress has been made in SLAM over the years, challenges remain. One prominent issue is robustness and accuracy in dynamic environments, which can introduce uncertainties and errors into the estimation process. Traditional methods that use temporal information to differentiate static and dynamic objects have limited accuracy and applicability. Research has therefore leaned towards deep learning-based methods, which leverage semantic segmentation and motion estimation to handle dynamic objects, aiming to improve accuracy and adaptability in complex scenes. This article proposes an approach to enhance the robustness and precision of monocular visual odometry in dynamic environments: the semantic segmentation algorithm DeepLabV3+ is used to identify dynamic objects in the image, and a motion consistency check is then applied to remove feature points belonging to those objects. The remaining static feature points are used for feature matching and pose estimation based on ORB-SLAM2, evaluated on the Technical University of Munich (TUM) dataset. Experimental results show that our method outperforms traditional visual odometry methods in accuracy and robustness, especially in dynamic environments. Compared with the original ORB-SLAM2, the system significantly reduces the absolute trajectory error and the relative pose error in dynamic scenes, substantially improving the accuracy and robustness of the SLAM system's pose estimation.
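    The dynamic-point filtering step can be sketched as follows: segment the frame, then discard ORB keypoints that land on likely-dynamic classes. torchvision ships DeepLabV3 rather than DeepLabV3+, so it stands in here; the dynamic class set ("person") and the omitted motion consistency check are assumptions.

```python
# Dynamic-feature filtering sketch: semantic masks + ORB keypoint pruning.
import cv2
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

PERSON = 15                          # "person" in the 21-class VOC label map
model = deeplabv3_resnet50(pretrained=True).eval()
prep = transforms.Compose([transforms.ToTensor(),
                           transforms.Normalize([0.485, 0.456, 0.406],
                                                [0.229, 0.224, 0.225])])

def static_keypoints(bgr_image):
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        labels = model(prep(rgb).unsqueeze(0))["out"].argmax(1)[0].numpy()
    keypoints = cv2.ORB_create(2000).detect(bgr_image, None)
    # Keep keypoints on (assumed) static pixels; a motion consistency check
    # would further prune points with large epipolar error between frames.
    return [kp for kp in keypoints
            if labels[int(kp.pt[1]), int(kp.pt[0])] != PERSON]
```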
  20. Dobrojevic M, Zivkovic M, Chhabra A, Sani NS, Bacanin N, Mohd Amin M
    PeerJ Comput Sci, 2023;9:e1405.
    PMID: 37409075 DOI: 10.7717/peerj-cs.1405
    An ever-increasing number of electronic devices integrated into the Internet of Things (IoT) generates vast amounts of data, which is transported over the network and stored for further analysis. However, besides the undisputed advantages of this technology, it also brings risks of unauthorized access and data compromise, situations where machine learning (ML) and artificial intelligence (AI) can help with the detection of potential threats and intrusions and the automation of the diagnostic process. The effectiveness of the applied algorithms largely depends on the previously performed optimization, i.e., the predetermined values of hyperparameters and the training conducted to achieve the desired result. Therefore, to address the very important issue of IoT security, this article proposes an AI framework based on a simple convolutional neural network (CNN) and an extreme learning machine (ELM) tuned by a modified sine cosine algorithm (SCA). Notwithstanding that many methods for addressing security issues have been developed, there is always room for further improvement, and the proposed research tries to fill this gap. The introduced framework was evaluated on two ToN IoT intrusion detection datasets, which consist of network traffic data generated in Windows 7 and Windows 10 environments. The analysis of the results suggests that the proposed model achieved a superior level of classification performance on the observed datasets. Additionally, besides conducting rigorous statistical tests, the best derived model is interpreted by SHapley Additive exPlanations (SHAP) analysis, and the findings can be used by security experts to further enhance the security of IoT systems.
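    To make the classifier stage concrete, here is a plain extreme learning machine in NumPy: a fixed random hidden layer with output weights solved by least squares. The CNN feature extractor and the modified-SCA tuning loop are omitted; the hidden size and activation are assumptions.

```python
# Plain ELM sketch: random hidden projection + pseudo-inverse output weights.
import numpy as np

class ELM:
    def __init__(self, n_hidden=256, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)        # fixed random projection

    def fit(self, X, y_onehot):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Output weights via Moore-Penrose pseudo-inverse: the only "training".
        self.beta = np.linalg.pinv(H) @ y_onehot
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta         # argmax(1) gives class ids

# SCA-style tuning would iterate sine/cosine-driven candidate updates of
# hyperparameters (e.g. n_hidden) around the best solution found so far.
```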