Displaying publications 1 - 20 of 28 in total

  1. Zhang H, Feng Y, Wang L
    Comput Intell Neurosci, 2022;2022:3948221.
    PMID: 35909867 DOI: 10.1155/2022/3948221
    With the rapid development of image and video technology and the tourism economy, tourism economic data are gradually becoming big data. How to schedule these data efficiently has therefore become a hot research topic. This paper first summarizes the research results on image and video, cloud computing, the tourism economy, and data scheduling algorithms. Secondly, the origin, structure, development, and service types of cloud computing are expounded in detail. To solve the problem of tourism economic data scheduling, this paper treats completion time and cross-node transmission delay as the constraints of the scheduling problem. A constraint model of data scheduling is established, the fitness function is improved on the basis of an artificial immune algorithm combined with the constraint model, and directional recombination of excellent antibodies is carried out using the advantages of gene recombination, so as to approach the optimal solution more closely. When the resource node scale is 100, the response time of EDSA is 107.92 seconds. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Cloud Computing*
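    Editor's sketch: a minimal illustration of the penalty-based, constrained fitness idea this abstract describes, not the paper's EDSA implementation; the constraint bounds and penalty weight below are assumed values.

```python
# Hypothetical penalty-based fitness for constrained data scheduling.
# Bounds and penalty weight are illustrative assumptions, not from the paper.
def fitness(completion_time, transfer_delay,
            max_completion=120.0, max_delay=15.0, penalty=1000.0):
    score = completion_time + transfer_delay        # base objective: lower is better
    if completion_time > max_completion:            # completion-time constraint
        score += penalty * (completion_time - max_completion)
    if transfer_delay > max_delay:                  # cross-node delay constraint
        score += penalty * (transfer_delay - max_delay)
    return score

print(fitness(107.92, 9.5))    # feasible candidate
print(fitness(130.00, 9.5))    # infeasible: completion-time penalty dominates
```

    An immune algorithm would minimise a score of this kind while recombining low-scoring "antibodies" (candidate schedules).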
  2. Salih S, Hamdan M, Abdelmaboud A, Abdelaziz A, Abdelsalam S, Althobaiti MM, et al.
    Sensors (Basel), 2021 Dec 15;21(24).
    PMID: 34960483 DOI: 10.3390/s21248391
    Cloud ERP is a type of enterprise resource planning (ERP) system that runs on the vendor's cloud platform instead of an on-premises network, enabling companies to connect through the Internet. The goal of this study was to rank and prioritise the factors driving cloud ERP adoption by organisations and to identify the critical issues in terms of security, usability, and vendors that impact adoption of cloud ERP systems. The assessment of critical success factors (CSFs) in on-premises ERP adoption and implementation has been well documented; however, no previous research has been carried out on CSFs in cloud ERP adoption. Therefore, the contribution of this research is to provide research and practice with the identification and analysis of 16 CSFs through a systematic literature review, in which 73 publications on cloud ERP adoption from a range of conferences and journals were assessed using inclusion and exclusion criteria. Drawing from the literature, we found that security, usability, and vendors were the top three most widely cited critical issues for the adoption of cloud-based ERP; hence, the second contribution of this study was an integrative model constructed with 12 drivers based on the security, usability, and vendor characteristics that may have the greatest influence on these top critical issues in the adoption of cloud ERP systems. We also identified critical gaps in current research, such as the inconclusiveness of findings related to security, usability, and vendor critical issues, by highlighting the most important drivers influencing those issues in cloud ERP adoption and the lack of discussion on the nature of the criticality of those CSFs. This research will aid in the development of new strategies, or the revision of existing strategies and policies, aimed at effectively integrating cloud ERP into cloud computing infrastructure. It will also allow cloud ERP suppliers to determine organisations' and business owners' expectations and implement appropriate tactics. A better understanding of the CSFs will narrow the field of failure and assist practitioners and managers in increasing their chances of success.
    Matched MeSH terms: Cloud Computing*
  3. Abdullah MFA, Yogarayan S, Abdul Razak SF, Azman A, Muhamad Amin AH, Salleh M
    F1000Res, 2021;10:1104.
    PMID: 38595984 DOI: 10.12688/f1000research.73269.4
    Vehicle to Everything (V2X) communications and services have sparked considerable interest as a potential component of future Intelligent Transportation Systems. V2X serves to organise communication and interaction between vehicle to vehicle (V2V), vehicle to infrastructure (V2I), vehicle to pedestrians (V2P), and vehicle to networks (V2N). However, having multiple communication channels can generate a vast amount of data for processing and distribution. In addition, V2X services may be subject to performance requirements relating to dynamic handover and low-latency communication channels. High throughput, low delay, and reliable packet delivery are the core requirements for V2X services. Edge Computing (EC) may be a feasible option to address the challenge of dynamic handover and low latency to allow V2X information to be transmitted across vehicles. Currently, existing comparative studies do not cover the applicability of EC for V2X. This review explores EC approaches to determine their relevance for V2X communication and services. EC allows devices to carry out part or all of the data processing at the point where data is collected. The emphasis of this review is on several methods identified in the literature for implementing effective EC. We describe each method individually and compare them according to their applicability. The findings of this work indicate that most methods can simulate the EC positioning under predefined scenarios. These include the use of Mobile Edge Computing, Cloudlet, and Fog Computing. However, since most studies are carried out using simulation tools, there is a potential limitation in that crucial data in the search for EC positioning may be overlooked or discarded for the sake of bandwidth reduction. The EC approaches considered in this work are limited to the literature on the successful implementation of V2X communication and services. The outcome of this work could considerably help other researchers better characterise EC applicability for V2X communications and services.
    Matched MeSH terms: Cloud Computing*
  4. Ahmad Z, Jehangiri AI, Ala'anzy MA, Othman M, Umar AI
    Sensors (Basel), 2021 Oct 30;21(21).
    PMID: 34770545 DOI: 10.3390/s21217238
    Cloud computing is a fully fledged, mature, and flexible computing paradigm that provides services to scientific and business applications in a subscription-based environment. Scientific applications such as Montage and CyberShake are organized scientific workflows with data- and compute-intensive tasks, and they also have some special characteristics. These characteristics include the tasks of scientific workflows being executed in terms of integration, disintegration, pipeline, and parallelism, and they thus require special attention to task management and data-oriented resource scheduling and management. Tasks executed in a pipeline are considered bottleneck executions, whose failure results in a wholly futile execution; this requires fault-tolerance-aware execution. Tasks executed in parallel require similar instances of cloud resources, so cluster-based execution may improve system performance in terms of makespan and execution cost. Therefore, this research work presents a cluster-based, fault-tolerant and data-intensive (CFD) scheduling strategy for scientific applications in cloud environments. The CFD strategy addresses the data intensiveness of scientific workflow tasks with cluster-based, fault-tolerant mechanisms. The Montage scientific workflow is used in simulation, and the results of the CFD strategy were compared with three well-known heuristic scheduling policies: (a) MCT, (b) Max-min, and (c) Min-min. The simulation results showed that the CFD strategy reduced the makespan by 14.28%, 20.37%, and 11.77%, respectively, compared with the three existing policies. Similarly, the CFD strategy reduces the execution cost by 1.27%, 5.3%, and 2.21%, respectively, compared with the existing policies. With the CFD strategy, the SLA is not violated with regard to time and cost constraints, whereas it is violated by the existing policies numerous times. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Cloud Computing*
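    Editor's sketch: a toy illustration of the fault-tolerance idea for pipeline (bottleneck) tasks, in which a failed task is re-submitted to the next VM in its cluster up to a retry limit. The failure probability and VM ordering are assumptions; this does not reproduce the CFD strategy itself.

```python
# Toy fault-tolerant execution: retry a failed task on the next VM in the
# cluster. The 30% failure probability and VM ordering are assumptions.
import random

def run_with_failover(task_id, vm_cluster, max_retries=3):
    for attempt, vm in enumerate(vm_cluster[:max_retries + 1]):
        if random.random() > 0.3:                 # simulate a 70% success rate
            return f"task {task_id} completed on VM {vm} (attempt {attempt + 1})"
    return f"task {task_id} failed after {max_retries + 1} attempts"

random.seed(42)
for t in range(3):
    print(run_with_failover(t, vm_cluster=[2, 0, 1]))   # fastest VM first
```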
  5. Abd Elaziz M, Abualigah L, Ibrahim RA, Attiya I
    Comput Intell Neurosci, 2021;2021:9114113.
    PMID: 34976046 DOI: 10.1155/2021/9114113
    Instead of the cloud, Internet of Things (IoT) activities are offloaded into fog computing to boost the quality of service (QoS) needed by many applications. However, the availability of continuous computing resources on fog computing servers is one of the restrictions for IoT applications, since transmitting the large amount of data generated by IoT devices would create network traffic and increase computational overhead. Task scheduling is therefore the main problem that needs to be solved efficiently. This study proposes an energy-aware model using an enhanced arithmetic optimization algorithm (AOA), called AOAM, which addresses the fog computing task scheduling problem to maximize users' QoS by minimizing the makespan measure. In the proposed AOAM, we enhance the conventional AOA's searchability using the marine predators algorithm (MPA) search operators to address solution diversity and the local-optimum problem. The proposed AOAM is validated using several parameters, including various clients, data centers, hosts, virtual machines, and tasks, and standard evaluation measures, including energy and makespan. The obtained results are compared with other state-of-the-art methods and show that AOAM is promising and solves task scheduling effectively compared with the other methods.
    Matched MeSH terms: Cloud Computing
  6. Madni SHH, Abd Latiff MS, Abdullahi M, Abdulhamid SM, Usman MJ
    PLoS One, 2017;12(5):e0176321.
    PMID: 28467505 DOI: 10.1371/journal.pone.0176321
    Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used to address this problem, but choosing the appropriate algorithm for a task assignment problem of a particular nature is difficult, since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments, with the aim of comparing their performance in terms of cost, degree of imbalance, makespan, and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min, and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Cloud Computing*
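    Editor's sketch: minimal implementations of two of the six rule-based heuristics named above, MCT and Min-min, for assigning independent tasks to heterogeneous VMs. Task lengths and VM speeds are illustrative values, not from the study.

```python
# Minimal MCT and Min-min heuristics for independent task scheduling.
# Task lengths (million instructions) and VM speeds (MIPS) are hypothetical.

def mct_schedule(task_lengths, vm_speeds):
    """Assign each task (in arrival order) to the VM that completes it earliest."""
    ready = [0.0] * len(vm_speeds)            # time at which each VM becomes free
    assignment = []
    for length in task_lengths:
        # completion time of this task on each VM = VM's current load + run time
        completions = [ready[v] + length / vm_speeds[v] for v in range(len(vm_speeds))]
        best = min(range(len(vm_speeds)), key=lambda v: completions[v])
        ready[best] = completions[best]
        assignment.append(best)
    return assignment, max(ready)             # schedule and resulting makespan

def min_min_schedule(task_lengths, vm_speeds):
    """Repeatedly pick the task whose best-case completion time is smallest."""
    ready = [0.0] * len(vm_speeds)
    pending = list(enumerate(task_lengths))
    assignment = {}
    while pending:
        best_task, best_vm, best_ct = None, None, float("inf")
        for tid, length in pending:           # earliest completion over all pairs
            for v, speed in enumerate(vm_speeds):
                ct = ready[v] + length / speed
                if ct < best_ct:
                    best_task, best_vm, best_ct = tid, v, ct
        ready[best_vm] = best_ct
        assignment[best_task] = best_vm
        pending = [(t, l) for t, l in pending if t != best_task]
    return assignment, max(ready)

tasks = [40, 12, 95, 7, 60, 33]
vms = [1.0, 2.5, 1.8]
print(mct_schedule(tasks, vms))
print(min_min_schedule(tasks, vms))
```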
  7. Mutlag AA, Ghani MKA, Mohammed MA, Lakhan A, Mohd O, Abdulkareem KH, et al.
    Sensors (Basel), 2021 Oct 19;21(20).
    PMID: 34696135 DOI: 10.3390/s21206923
    In the last decade, healthcare technologies have been advancing progressively in practice. Healthcare applications such as ECG monitoring, heartbeat analysis, and blood pressure control connect to external servers using a model known as cloud computing. The emerging cloud paradigm offers different models, such as fog computing and edge computing, to enhance the performance of healthcare applications with minimum end-to-end delay in the network. However, many research challenges exist in fog-cloud enabled networks for healthcare applications. Therefore, in this paper, a Critical Healthcare Task Management (CHTM) model is proposed and implemented using an ECG dataset. We design a resource scheduling model among fog nodes at the fog level. A multi-agent system is proposed to provide complete management of the network from the edge to the cloud. The proposed model overcomes the limitations of existing approaches by providing interoperability, resource sharing, scheduling, and dynamic task allocation to manage critical tasks effectively. The simulation results show that our model, in comparison with the cloud, significantly reduces network usage by 79%, response time by 90%, network delay by 65%, energy consumption by 81%, and instance cost by 80%.
    Matched MeSH terms: Cloud Computing*
  8. Bukhari MM, Ghazal TM, Abbas S, Khan MA, Farooq U, Wahbah H, et al.
    Comput Intell Neurosci, 2022;2022:3606068.
    PMID: 35126487 DOI: 10.1155/2022/3606068
    Smart applications and intelligent systems are being developed that are self-reliant, adaptive, and knowledge-based in nature. Emergency and disaster management, aerospace, healthcare, IoT, and mobile applications, among others, are revolutionizing the world of computing. Applications with a large and growing number of devices have rendered the current centralized cloud design impractical. Despite the use of 5G technology, delay-sensitive applications and the cloud cannot work in parallel, because certain parameters, such as latency, bandwidth, and response time, exceed their threshold values. Middleware proves to be a better solution to cope with these issues while satisfying demanding task-offloading requirements. Fog computing is recommended as middleware in this research article because it provides services at the edge of the network, so delay-sensitive applications can be served effectively. On the other hand, fog nodes contain a limited set of resources and may not be able to process all tasks, especially those of computation-intensive applications. Additionally, fog is not a replacement for the cloud but a supplement to it; both behave as counterparts and offer their services according to task needs, with fog computing in relatively closer proximity to the devices than the cloud. The problem arises when a decision needs to be made about what to offload (data, computation, or application), and more specifically where to offload it (fog or cloud) and how much to offload. Fog-cloud collaboration is stochastic in terms of task-related attributes such as task size, duration, arrival rate, and required resources. Dynamic task offloading becomes crucial in order to utilize the resources at the fog and the cloud and improve QoS. Since the formation of a task offloading policy is complex in nature, this research article addresses the problem and proposes an intelligent task offloading model. Simulation results demonstrate that the proposed logistic regression model achieves 86% accuracy compared with other algorithms and provides confidence in the predictive task offloading policy by ensuring process consistency and reliability. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Cloud Computing*
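    Editor's sketch: a hedged illustration of the offloading idea above, training a logistic regression classifier to predict whether a task runs on fog or cloud from task-related attributes. The feature names, the synthetic data, and the labelling rule are assumptions for illustration, not the paper's dataset or model.

```python
# Logistic-regression fog/cloud offloading decision on synthetic task data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# assumed features: task size (MB), duration (s), arrival rate (tasks/s), vCPUs
X = np.column_stack([
    rng.uniform(0.1, 50, n),       # task size
    rng.uniform(0.01, 5, n),       # duration
    rng.uniform(1, 100, n),        # arrival rate
    rng.uniform(0.1, 4, n),        # required vCPUs
])
# toy labelling rule: big, compute-heavy tasks go to cloud (1), light ones to fog (0)
y = ((X[:, 0] * X[:, 3]) > 40).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("offloading accuracy:", clf.score(X_te, y_te))
print("offload task (30 MB, 2 s, 50/s, 3 vCPU) to cloud?",
      bool(clf.predict([[30, 2, 50, 3]])[0]))
```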
  9. Meri A, Hasan MK, Dauwed M, Jarrar M, Aldujaili A, Al-Bsheish M, et al.
    PLoS One, 2023;18(8):e0290654.
    PMID: 37624836 DOI: 10.1371/journal.pone.0290654
    The need for cloud services has risen globally to provide a platform for healthcare providers to efficiently manage their citizens' health records and thus provide treatment remotely. In Iraq, the healthcare records of public hospitals are increasing progressively under poor digital management. While recent works indicate cloud computing as a platform for all sectors globally, a lack of empirical evidence demands a comprehensive investigation to identify the significant factors that influence the utilization of cloud health computing. Here we provide a cost-effective, modular, and computationally efficient model of utilizing cloud computing based on organization theory and the theory of reasoned action. Data from a total of 105 key informants were analyzed. Partial least squares structural equation modeling was used for data analysis to explore the effect of organizational structure variables on healthcare information technicians' behaviors in utilizing cloud services. Empirical results revealed that Internet networks, software modularity, hardware modularity, and training availability significantly influence information technicians' behavioral control and confirmation. Furthermore, these factors positively impacted their utilization of cloud systems, while behavioral control had no significant effect. The importance-performance map analysis further confirms that these factors exhibit high importance in shaping user utilization. Our findings can provide a comprehensive and unified guide to policymakers in the healthcare industry by focusing on the significant factors in organizational and behavioral contexts to engage health information technicians in the development and implementation phases.
    Matched MeSH terms: Cloud Computing*
  10. Aldeen YA, Salleh M, Aljeroudi Y
    J Biomed Inform, 2016 08;62:107-16.
    PMID: 27369566 DOI: 10.1016/j.jbi.2016.06.011
    Cloud computing (CC) is a magnificent service-based delivery model with gigantic computer processing power and data storage across connected communication channels. It has imparted an overwhelming technological impetus to the Internet-mediated IT industry, where users can easily share private data for further analysis and mining. Furthermore, user-friendly CC services enable users to deploy sundry applications economically. Meanwhile, simple data sharing has impelled various phishing attacks and malware-assisted security threats. Some privacy-sensitive applications, such as health services on the cloud, that are built with several economic and operational benefits necessitate enhanced security. Thus, absolute cyberspace security and mitigation against phishing attacks became mandatory to protect overall data privacy. Typically, diverse application datasets are anonymized with better privacy for owners without providing all secrecy requirements for newly added records. Some proposed techniques address this issue by re-anonymizing the datasets from scratch. The utmost privacy protection over incremental datasets on CC is far from being achieved. Certainly, the distribution of huge data volumes across multiple storage nodes limits privacy preservation. In this view, we propose a new anonymization technique to attain better privacy protection with high data utility over distributed and incremental datasets on CC. The proficiency of data privacy preservation and improved confidentiality requirements is demonstrated through performance evaluation. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Cloud Computing*
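    Editor's sketch: a toy illustration of the generalisation-based anonymisation this abstract discusses, coarsening quasi-identifiers until every record shares its tuple with at least k-1 others (k-anonymity). The records, attributes, and generalisation scheme are assumptions; the paper's incremental technique is not reproduced.

```python
# Toy k-anonymity by generalisation over (age, zip) quasi-identifiers.
from collections import Counter

def generalize_age(age, level):
    width = 10 * (2 ** level)                 # widen the age band per level
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def k_anonymize(records, k):
    for level in range(5):                    # try increasingly coarse bands
        qi = [(generalize_age(r["age"], level), r["zip"][:3]) for r in records]
        if min(Counter(qi).values()) >= k:    # every group has >= k members?
            return [{"age": a, "zip": z} for a, z in qi]
    return None                               # give up past the coarsest level

data = [{"age": 23, "zip": "47810"}, {"age": 27, "zip": "47811"},
        {"age": 31, "zip": "47812"}, {"age": 36, "zip": "47813"}]
print(k_anonymize(data, 2))                   # each (age-band, zip-prefix) occurs twice
```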
  11. Alnajrani HM, Norman AA, Ahmed BH
    PLoS One, 2020;15(6):e0234312.
    PMID: 32525944 DOI: 10.1371/journal.pone.0234312
    As a result of a shift in the world of technology, the combination of ubiquitous mobile networks and cloud computing produced the mobile cloud computing (MCC) domain. As a consequence of major concerns of cloud users, privacy and data protection are getting substantial attention in the field. Currently, a considerable number of papers have been published on MCC with a growing interest in privacy and data protection. Along with this advance in MCC, however, no specific investigation highlights the results of the existing studies in privacy and data protection. In addition, no particular exploration highlights trends and open issues in the domain. Accordingly, the objective of this paper is to highlight the results of existing primary studies published on privacy and data protection in MCC to identify current trends and open issues. In this investigation, a systematic mapping study was conducted with a set of six research questions. A total of 1711 studies published from 2009 to 2019 were obtained. Following a filtering process, a collection of 74 primary studies was selected. As a result, the present data privacy threats, attacks, and solutions were identified. Also, the ongoing trends of data privacy practice were observed. Moreover, the most utilized measures, research type, and contribution type facets were emphasized. Additionally, the current open research issues in privacy and data protection in MCC were highlighted. Furthermore, the results demonstrate the current state of the art of privacy and data protection in MCC, and the conclusion will help researchers identify research trends and open issues in MCC and offer useful information for practitioners.
    Matched MeSH terms: Cloud Computing*
  12. Ahmed AA, Xue Li C
    J Forensic Sci, 2018 Jan;63(1):112-121.
    PMID: 28397244 DOI: 10.1111/1556-4029.13506
    Cloud storage services allow users to store their data online so that they can remotely access, maintain, manage, and back up data from anywhere via the Internet. Although helpful, such storage creates a challenge for digital forensic investigators and practitioners in collecting, identifying, acquiring, and preserving evidential data. This study proposes an investigation scheme for analyzing data remnants and determining probative artifacts in a cloud environment. Using pCloud as a case study, this research collected the data remnants available on end-user device storage following the storing, uploading, and accessing of data in the cloud storage. Data remnants were collected from several sources, including client software files, directory listings, prefetch, registry, network PCAP, browser data, and memory and link files. Results demonstrate that the collected remnant data are beneficial in determining a sufficient number of artifacts about the investigated cybercrime.
    Matched MeSH terms: Cloud Computing
  13. Yıldırım Ö, Pławiak P, Tan RS, Acharya UR
    Comput Biol Med, 2018 11 01;102:411-420.
    PMID: 30245122 DOI: 10.1016/j.compbiomed.2018.09.009
    This article presents a new deep learning approach for cardiac arrhythmia (17 classes) detection based on long-duration electrocardiography (ECG) signal analysis. Cardiovascular disease prevention is one of the most important tasks of any health care system, as about 50 million people worldwide are at risk of heart disease. Although automatic analysis of the ECG signal is very popular, current methods are not satisfactory. The goal of our research was to design a new method based on deep learning to efficiently and quickly classify cardiac arrhythmias. The described research is based on 1000 ECG signal fragments from the MIT-BIH Arrhythmia database for one lead (MLII) from 45 persons. An approach based on the analysis of 10-s ECG signal fragments (not a single QRS complex) is applied (on average, 13 times fewer classifications/analyses). A complete end-to-end structure was designed instead of the hand-crafted feature extraction and selection used in traditional methods. Our main contribution is the design of a new 1D Convolutional Neural Network model (1D-CNN). The proposed method is 1) efficient, 2) fast (real-time classification), 3) non-complex, and 4) simple to use (combined feature extraction, selection, and classification in one stage). The deep 1D-CNN achieved an overall recognition accuracy of 91.33% across the 17 cardiac arrhythmia classes, with a classification time of 0.015 s per single sample. Compared to current research, our results are among the best to date, and our solution can be implemented in mobile devices and cloud computing. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Cloud Computing
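    Editor's sketch: a minimal Keras 1D-CNN in the spirit of the end-to-end model described above, mapping raw 10-s single-lead ECG fragments to one of 17 classes. The layer sizes and filter counts are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal 1D-CNN for 17-class ECG fragment classification (illustrative).
import tensorflow as tf
from tensorflow.keras import layers, models

SAMPLES = 3600    # 10 s at 360 Hz (MIT-BIH sampling rate), single lead
N_CLASSES = 17

model = models.Sequential([
    layers.Input(shape=(SAMPLES, 1)),
    layers.Conv1D(16, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, ...) would follow once labelled fragments are loaded
```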
  14. Al-Absi AA, Al-Sammarraie NA, Shaher Yafooz WM, Kang DK
    Biomed Res Int, 2018;2018:7501042.
    PMID: 30417014 DOI: 10.1155/2018/7501042
    MapReduce is the preferred cloud computing framework for large data analysis and application processing. MapReduce frameworks currently in place suffer performance degradation due to the adoption of sequential processing approaches with little modification, and thus exhibit underutilization of cloud resources. To overcome this drawback and reduce costs, we introduce a Parallel MapReduce (PMR) framework in this paper. We design a novel parallel execution strategy for Map and Reduce worker nodes. Our strategy enables further performance improvement and efficient utilization of cloud resources by executing Map and Reduce functions in a way that exploits the multicore environments available on computing nodes. We explain in detail the makespan modeling and working principle of the PMR framework in the paper. The performance of PMR is compared with Hadoop through experiments considering three biomedical applications. Experiments conducted for the BLAST, CAP3, and DeepBind biomedical applications report makespan time reductions of 38.92%, 18.00%, and 34.62%, respectively, for the PMR framework against the Hadoop framework. The experimental results show that the proposed PMR cloud computing platform is robust, cost-effective, and scalable, and sufficiently supports diverse applications on public and private cloud platforms. Consequently, the overall presentation and results indicate good agreement between the theoretical makespan model presented and the experimental values investigated. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Cloud Computing
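    Editor's sketch: a toy single-machine illustration of the parallel Map/Reduce execution idea, with map tasks running concurrently across worker processes and reduction applied per key. This is the classic word count, not the PMR framework itself.

```python
# Toy parallel map-reduce: parallel map phase, grouped per-key reduce phase.
from multiprocessing import Pool
from collections import defaultdict
from functools import reduce

def map_fn(line):
    # emit (word, 1) pairs, the classic word-count mapper
    return [(word, 1) for word in line.split()]

def reduce_fn(a, b):
    return a + b

if __name__ == "__main__":
    data = ["cloud cloud fog", "fog edge cloud", "edge edge"]
    with Pool(processes=3) as pool:           # map phase in parallel workers
        mapped = pool.map(map_fn, data)
    groups = defaultdict(list)                # shuffle: group values by key
    for pairs in mapped:
        for k, v in pairs:
            groups[k].append(v)
    result = {k: reduce(reduce_fn, vs) for k, vs in groups.items()}
    print(result)                             # {'cloud': 3, 'fog': 2, 'edge': 2}
```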
  15. Paul A, K S V, Sood A, Bhaumik S, Singh KA, Sethupathi S, et al.
    Bull Environ Contam Toxicol, 2022 Dec 13;110(1):7.
    PMID: 36512073 DOI: 10.1007/s00128-022-03638-9
    The presence of suspended particulate matter (SPM) in a waterbody or river can be caused by multiple factors, such as pollutants from the discharge of poorly maintained sewage, siltation, sedimentation, floods, and even bacteria. In this study, remote sensing techniques were used to understand the effects of the pandemic-induced lockdown on the SPM concentration in the lower Tapi (Ukai) reservoir. The estimation was done using Landsat-8 OLI (Operational Land Imager) imagery, which has 12-bit radiometric resolution and a spatial resolution of 30 m. The Google Earth Engine (GEE) cloud computing platform was used in this study to generate the products. GEE is a semi-automated workflow system using a robust approach designed for scientific analysis and visualization of geospatial datasets. An algorithm was deployed, and a time-series (2013-2020) analysis was done for the study area. It was found that the mean SPM value in the Tapi River during 2020 was the lowest of the past seven years for the same period. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Cloud Computing
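    Editor's sketch: a hedged Google Earth Engine (Python API) illustration of the kind of Landsat-8 time-series workflow described above. The region geometry, cloud-cover filter, and the use of mean red-band reflectance as a simple turbidity/SPM proxy are assumptions for illustration; the paper's actual retrieval algorithm is not reproduced.

```python
# Landsat-8 TOA time series over an assumed reservoir box via Earth Engine.
import ee

ee.Initialize()                                # may require ee.Authenticate() first
ukai = ee.Geometry.Rectangle([73.45, 21.15, 73.75, 21.35])   # rough, assumed box

collection = (ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
              .filterBounds(ukai)
              .filterDate("2013-01-01", "2020-12-31")
              .filter(ee.Filter.lt("CLOUD_COVER", 20)))

def mean_red(image):
    # mean red-band (B4) TOA reflectance over the region as a crude SPM proxy
    stat = image.select("B4").reduceRegion(
        reducer=ee.Reducer.mean(), geometry=ukai, scale=30, maxPixels=1e9)
    return ee.Feature(None, {"date": image.date().format("YYYY-MM-dd"),
                             "red_mean": stat.get("B4")})

series = ee.FeatureCollection(collection.map(mean_red))
print(series.limit(5).getInfo())               # first few (date, proxy) records
```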
  16. Saif Y, Yusof Y, Rus AZM, Ghaleb AM, Mejjaouli S, Al-Alimi S, et al.
    PLoS One, 2023;18(10):e0292814.
    PMID: 37831665 DOI: 10.1371/journal.pone.0292814
    In the context of Industry 4.0, manufacturing metrology is crucial for inspecting and measuring machines. Internet of Things (IoT) technology enables seamless communication between advanced industrial devices through local and cloud computing servers. This study investigates the use of the MQTT protocol to enhance the performance of circularity measurement data transmission between cloud servers and round-hole data sources through OpenCV. Accurate inspection of circular characteristics, particularly roundness errors, is vital for lubricant distribution, assemblies, and rotational force innovation. Circularity measurement techniques employ algorithms such as the minimum zone circle tolerance algorithm. Vision inspection systems, utilizing image processing techniques, can promptly and accurately detect quality concerns by analyzing the model's surface through circular dimension analysis. This involves sending the model's image to a computer, which employs techniques such as the Hough Transform, edge detection, and contour analysis to identify circular features and extract relevant parameters. This method is utilized in the camera industry and component assembly. To assess performance, a comparative experiment was conducted between the non-contact-based 3SMVI system and the contact-based CMM system widely used in various industries for roundness evaluation. The CMM technique is known for its high precision but is time-consuming. Experimental results indicated a variation of 5 to 9.6 micrometers between the two methods. It is suggested that using a high-resolution camera and appropriate lighting conditions can further enhance result precision. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Cloud Computing
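    Editor's sketch: a minimal combination of the two ingredients named above, OpenCV's Hough Transform to detect circular features and the MQTT protocol (via paho-mqtt) to publish the measured parameters. The broker address, topic, image file, and detector parameters are placeholders.

```python
# Detect circles with OpenCV and publish their parameters over MQTT.
import json
import cv2
import paho.mqtt.client as mqtt

img = cv2.imread("hole.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input image
img = cv2.medianBlur(img, 5)                          # denoise before detection
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=10, maxRadius=200)

client = mqtt.Client()                                # paho-mqtt 1.x constructor;
client.connect("broker.example.com", 1883)            # 2.x needs CallbackAPIVersion
if circles is not None:
    for x, y, r in circles[0]:                        # each detected (cx, cy, radius)
        payload = json.dumps({"cx": float(x), "cy": float(y), "radius": float(r)})
        client.publish("metrology/circularity", payload)
client.disconnect()
```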
  17. Zhao Z, Alli H, Ahmadipour M, Che Me R
    PLoS One, 2024;19(8):e0300266.
    PMID: 39173012 DOI: 10.1371/journal.pone.0300266
    The importance of incorporating an agile approach into creating sustainable products has been widely discussed. This approach can enhance innovation integration, improve adaptability to changing development circumstances, and increase the efficiency and quality of the product development process. While many agile methods originated in the software development context and were formulated based on successful software projects, they often fail due to incorrect procedures and a lack of acceptance, preventing deep integration into the process. Additionally, decision-making for market evaluation is often hindered by unclear and subjective information. Therefore, this study introduces an extended TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method for sustainable product development. This method leverages the benefits of cloud model theory to address randomness and uncertainty (intrapersonal uncertainty) and the advantages of rough set theory to flexibly handle market demand uncertainty without requiring extra information. The study proposes an integrated weighting method that considers both subjective and objective weights to determine comprehensive criteria weights. It also presents a new framework, named Sustainable Agility of Product Development (SAPD), which aims to evaluate criteria for assessing sustainable product development. To validate the effectiveness of the proposed method, a case study is conducted on small and medium enterprises in China. The obtained results show that the company needs to conduct product structure research and development to realize new product functions. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Cloud Computing
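    Editor's sketch: the classical TOPSIS ranking that the extended method above builds on (the cloud-model and rough-set extensions are not reproduced). The decision matrix, weights, and benefit/cost labels are made-up illustrative values.

```python
# Classical TOPSIS: rank alternatives by closeness to the ideal solution.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] True if larger is better."""
    m = matrix / np.linalg.norm(matrix, axis=0)        # vector normalisation
    v = m * weights                                    # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)          # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)           # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                     # closeness coefficient

alts = np.array([[7.0, 0.3, 120.0],                    # hypothetical alternatives
                 [9.0, 0.5, 150.0],                    # (quality, risk, cost)
                 [6.0, 0.2, 100.0]])
w = np.array([0.5, 0.2, 0.3])                          # assumed criteria weights
scores = topsis(alts, w, benefit=np.array([True, False, False]))
print(scores, "best alternative:", int(scores.argmax()))
```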
  18. Abdulhamid SM, Abd Latiff MS, Abdul-Salaam G, Hussain Madni SH
    PLoS One, 2016;11(7):e0158102.
    PMID: 27384239 DOI: 10.1371/journal.pone.0158102
    A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution suitable for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.
    Matched MeSH terms: Cloud Computing*
  19. Hussien HM, Yasin SM, Udzir NI, Ninggal MIH
    Sensors (Basel), 2021 Apr 02;21(7).
    PMID: 33918266 DOI: 10.3390/s21072462
    Blockchain technology provides a tremendous opportunity to transform current personal health record (PHR) systems into a decentralised network infrastructure. However, such technology possesses some drawbacks, such as issues in privacy and storage capacity. Given blockchain's transparency and decentralised features, medical data are visible to everyone on the network, which is inappropriate for certain medical applications. Moreover, storing vast medical data, such as patient medical history, laboratory tests, X-rays, and MRIs, significantly affects the repository storage of the blockchain. This study bridges the gap between PHRs and blockchain technology by offloading the vast medical data into InterPlanetary File System (IPFS) storage and establishing an enforced cryptographic authorisation and access control scheme for outsourced encrypted medical data. The access control scheme is constructed on the basis of a new lightweight cryptographic concept named smart contract-based attribute-based searchable encryption (SC-ABSE). This new cryptographic primitive is developed by extending ciphertext-policy attribute-based encryption (CP-ABE) and searchable symmetric encryption (SSE) and by leveraging the technology of smart contracts to achieve the following: (1) efficient and secure fine-grained access control of outsourced encrypted data, (2) confidentiality of data by eliminating trusted private key generators, and (3) a multikeyword searchable mechanism. Based on the decisional bilinear Diffie-Hellman (DBDH) and discrete logarithm (DL) hardness assumptions, the rigorous security indistinguishability analysis indicates that SC-ABSE is secure against the chosen-keyword attack (CKA) and satisfies keyword secrecy (KS) in the standard model. In addition, user collusion attacks are prevented, and the tamper-proof resistance of data is ensured. Furthermore, security validation is verified by simulating a formal verification scenario using Automated Validation of Internet Security Protocols and Applications (AVISPA), unveiling that SC-ABSE is resistant to man-in-the-middle (MIM) and replay attacks. The experimental analysis utilised real-world datasets to demonstrate the efficiency and utility of SC-ABSE in terms of computation overhead, storage cost and communication overhead. The proposed scheme is also designed and developed to evaluate throughput and latency of transactions using a standard benchmark tool known as Caliper. Lastly, simulation results show that SC-ABSE has high throughput and low latency, with an ultimate increase in network life compared with traditional healthcare systems.
    Matched MeSH terms: Cloud Computing
  20. Abdullahi M, Ngadi MA
    PLoS One, 2016;11(6):e0158229.
    PMID: 27348127 DOI: 10.1371/journal.pone.0158229
    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed on a pay-as-you-go basis. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of the task scheduling problem has been shown to be NP-complete, hence finding the exact solution, especially for large problem sizes, is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and solution quality of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduced the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Simulation results showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Cloud Computing
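    Editor's sketch: a toy illustration of the Simulated Annealing refinement idea that the hybrid SASOS applies to candidate schedules. The "schedule" here is a task-to-VM mapping, the objective is makespan, and all values (temperature, cooling rate, task/VM data) are illustrative assumptions.

```python
# SA-style local refinement of a task-to-VM schedule, minimising makespan.
import math
import random

def makespan(assign, task_lengths, vm_speeds):
    loads = [0.0] * len(vm_speeds)
    for tid, vm in enumerate(assign):
        loads[vm] += task_lengths[tid] / vm_speeds[vm]
    return max(loads)

def sa_refine(assign, task_lengths, vm_speeds, temp=10.0, cooling=0.95, steps=500):
    cur, cur_cost = list(assign), makespan(assign, task_lengths, vm_speeds)
    best, best_cost = list(cur), cur_cost
    for _ in range(steps):
        cand = list(cur)                              # neighbour: move one task
        cand[random.randrange(len(cand))] = random.randrange(len(vm_speeds))
        cost = makespan(cand, task_lengths, vm_speeds)
        # accept improvements always; accept worse moves with Boltzmann probability
        if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / temp):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand, cost
        temp *= cooling                               # cool the temperature
    return best, best_cost

tasks = [40, 12, 95, 7, 60, 33]
vms = [1.0, 2.5, 1.8]
init = [random.randrange(len(vms)) for _ in tasks]
print(sa_refine(init, tasks, vms))
```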