Stochastic computing (SC) is an alternative to ubiquitous deterministic computing in which a single logic gate can perform an arithmetic operation by exploiting the mathematics of probability. SC was proposed in the 1960s, when binary computing was expensive. Interest in SC has recently revived with the widespread adoption of deep learning, specifically the convolutional neural network (CNN) algorithm, owing to its practicality in hardware implementation. Although not all computing functions translate to the SC domain, researchers have proposed and tested several useful function blocks related to the CNN algorithm. An evolution of the CNN, the binarised neural network, has also gained attention in edge computing due to its compactness and computing efficiency. This study reviews various SC CNN hardware implementation methodologies. First, we review the fundamental concepts of SC and the associated circuit structures, and then compare the advantages and disadvantages amongst the different SC methods. Finally, we conclude the overview of SC in CNN and make suggestions for its widespread implementation.
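To make the gate-level arithmetic concrete, the sketch below (illustrative Python; the stream length, seed and operand values are assumptions, not from the review) multiplies two numbers in the unipolar SC format, where a value in [0, 1] is encoded as the probability of observing a 1 in a bitstream, so a single AND gate computes the product of two independent streams.

```python
import random

def to_bitstream(p, length, rng):
    """Encode a value p in [0, 1] as a bitstream whose fraction of 1s
    approximates p (unipolar stochastic format)."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def from_bitstream(bits):
    """Decode a bitstream back to a value estimate."""
    return sum(bits) / len(bits)

rng = random.Random(42)   # illustrative seed
length = 10_000           # longer streams reduce estimation variance
a, b = 0.60, 0.50

# A single AND gate multiplies two independent unipolar streams:
# P(out = 1) = P(a = 1) * P(b = 1).
sa = to_bitstream(a, length, rng)
sb = to_bitstream(b, length, rng)
product = [x & y for x, y in zip(sa, sb)]

print(f"exact: {a * b:.3f}, SC estimate: {from_bitstream(product):.3f}")
```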
birgHPC, a bootable Linux Live CD, has been developed to create high-performance clusters for bioinformatics and molecular dynamics studies from any Local Area Network (LAN)-networked computers. birgHPC features automated hardware and slot detection and provides a simple job-submission interface. The latest versions of GROMACS, NAMD, mpiBLAST and ClustalW-MPI can be run in parallel by simply booting the birgHPC CD or flash drive on the head node, which immediately enlists the remaining PCs on the network as compute nodes. Thus, non-computing-based researchers can build a temporary, affordable, scalable, high-performance computing environment using low-cost commodity hardware.
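As a hedged illustration of the kind of MPI workload such a cluster runs (the script, chunk names and launch line are hypothetical, not part of birgHPC itself), the head node can scatter work units to the compute nodes and gather their results:

```python
# Illustrative launch: mpirun -np 4 python split_work.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    # Pretend these are sequence-analysis work units (illustrative only).
    jobs = [f"chunk-{i}" for i in range(size)]
else:
    jobs = None

job = comm.scatter(jobs, root=0)            # one work unit per process
result = f"rank {rank} processed {job}"     # stand-in for real computation
results = comm.gather(result, root=0)       # head node collects the results

if rank == 0:
    print("\n".join(results))
```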
Wireless sensor networks (WSNs) comprise sensor nodes, each of which monitors a physical area and sends the collected information to a base station for further analysis. A key concern in WSNs is the detection and coverage of the target area, typically provided by random deployment. This paper reviews and addresses various area detection and coverage problems in sensor networks. It organizes the scenarios in which sensor node movement, driven by bio-inspired evolutionary algorithms, improves network coverage, and explains the concerns and objectives of controlling sensor node coverage. We discuss area coverage and target detection models based on evolutionary algorithms.
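As a minimal sketch of the evolutionary approach (the area size, sensing radius, binary-disc sensing model and simple (1+1) evolution strategy are illustrative assumptions, not the specific algorithms surveyed), node positions can be mutated and kept whenever the mutation improves a grid-sampled coverage fitness:

```python
import math
import random

AREA = 100.0      # square region side length (assumed)
RADIUS = 15.0     # binary-disc sensing radius (assumed)
N_SENSORS = 10
GRID = 20         # coverage is sampled on a GRID x GRID lattice

def coverage(positions):
    """Fraction of lattice points covered by at least one sensor
    under the binary-disc sensing model."""
    step = AREA / GRID
    covered = 0
    for i in range(GRID):
        for j in range(GRID):
            px, py = (i + 0.5) * step, (j + 0.5) * step
            if any(math.hypot(px - x, py - y) <= RADIUS for x, y in positions):
                covered += 1
    return covered / (GRID * GRID)

def mutate(positions, rng, sigma=5.0):
    """Perturb each sensor position with Gaussian noise, clamped to the area."""
    return [(min(AREA, max(0.0, x + rng.gauss(0, sigma))),
             min(AREA, max(0.0, y + rng.gauss(0, sigma))))
            for x, y in positions]

rng = random.Random(0)
# Random deployment, then a (1+1) evolution strategy on node positions.
best = [(rng.uniform(0, AREA), rng.uniform(0, AREA)) for _ in range(N_SENSORS)]
best_fit = coverage(best)
for _ in range(300):
    cand = mutate(best, rng)
    fit = coverage(cand)
    if fit >= best_fit:
        best, best_fit = cand, fit

print(f"coverage after optimisation: {best_fit:.2%}")
```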
Computational chemistry is a discipline concerned with computing the physical and chemical properties of atoms and molecules using the fundamentals of quantum mechanics. Such calculations are computationally expensive and limited by the available computing capability. High-performance computing (HPC) clusters alleviate this burden; however, they have always required a balance among four major factors: raw computing power, memory size, I/O capacity, and communication capacity. In this paper, we present the results of standard HPC benchmarks to assess the performance characteristics of the various hardware and software components of a home-built commodity-class Linux cluster. We optimized a range of TCP/MPICH parameters and achieved a maximum MPICH bandwidth of 666 Mbps. The bandwidth and latency of GA put/get operations were better than those of the corresponding MPICH send/receive operations. We also examined the NFS, PVFS2, and Lustre parallel filesystems; Lustre provided the best read/write bandwidths, exceeding 90% of those of the local filesystem.
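A bandwidth figure such as the 666 Mbps above is typically obtained with a ping-pong microbenchmark between two ranks; the sketch below (using mpi4py as an illustrative stand-in for the original MPICH C tests, with assumed message size and repetition count) shows the idea:

```python
# Illustrative run: mpiexec -n 2 python pingpong.py
import time
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

MSG_BYTES = 1 << 20   # 1 MiB message (assumed size)
REPS = 50
buf = bytearray(MSG_BYTES)
recv = bytearray(MSG_BYTES)

comm.Barrier()
start = time.perf_counter()
for _ in range(REPS):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(recv, source=1)
    elif rank == 1:
        comm.Recv(recv, source=0)
        comm.Send(buf, dest=0)
elapsed = time.perf_counter() - start

if rank == 0:
    # Each repetition moves the message across the link twice.
    mbps = (2 * REPS * MSG_BYTES * 8) / elapsed / 1e6
    print(f"bandwidth: {mbps:.1f} Mbit/s")
```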
Cloud computing (CC) has recently received tremendous attention from the IT industry and academic researchers. CC delivers its services to customers in a pay-as-you-go, anytime, anywhere manner, offering dynamically scalable services over the Internet on demand. Service provisioning therefore plays a key role in CC: the cloud customer must be able to select appropriate services according to his or her needs. Several approaches have been proposed to solve this service selection problem, including multicriteria decision analysis (MCDA), which enables a user to choose from among a number of available options. In this paper, we analyze the application of MCDA to service selection in CC. We identify and synthesize several MCDA techniques and provide a comprehensive analysis of this technology for general readers. In addition, we present a taxonomy derived from a survey of the current literature. Finally, we highlight several state-of-the-art practical aspects of MCDA implementation in cloud computing service selection. The contributions of this study are four-fold: (a) focusing on state-of-the-art MCDA techniques, (b) highlighting the comparative analysis and suitability of several MCDA methods, (c) presenting a taxonomy based on an extensive literature review, and (d) analyzing and summarizing cloud computing service selection in different scenarios.
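As one concrete instance of MCDA (the services, criteria, weights and scores below are illustrative assumptions; the surveyed literature covers many other methods), the Simple Additive Weighting method normalises each criterion, inverts cost-type criteria, and ranks services by weighted sum:

```python
# Candidate cloud services scored on three criteria (illustrative data).
services = {
    "ServiceA": {"cost": 0.12, "availability": 99.5, "latency": 80},
    "ServiceB": {"cost": 0.20, "availability": 99.9, "latency": 40},
    "ServiceC": {"cost": 0.08, "availability": 99.0, "latency": 120},
}
weights = {"cost": 0.4, "availability": 0.4, "latency": 0.2}
benefit = {"cost": False, "availability": True, "latency": False}

def saw_scores(services, weights, benefit):
    """Simple Additive Weighting: min-max normalise each criterion,
    invert cost-type criteria, then take the weighted sum."""
    scores = {name: 0.0 for name in services}
    for crit, w in weights.items():
        vals = [s[crit] for s in services.values()]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0
        for name, s in services.items():
            norm = (s[crit] - lo) / span
            if not benefit[crit]:
                norm = 1.0 - norm   # lower cost/latency is better
            scores[name] += w * norm
    return scores

ranking = sorted(saw_scores(services, weights, benefit).items(),
                 key=lambda kv: kv[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```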
In the past, simulations of charge dynamics in solid-state devices, such as carrier mobility and transient drift velocities, were performed on mainframe systems or high-performance computing facilities, because such simulations are computationally costly when implemented on single-processor personal computers (PCs). When simulating charge dynamics, large ensembles of particles, often exceeding 40000, are usually preferred to ensure numerically sound results. Implementing this type of simulation with the conventional ensemble or single-particle Monte Carlo method on a single-processor PC takes a very long time, even on a fast 2.0 GHz machine. Lately, a more efficient, readily available and cost-effective solution to this problem is to employ an array of PCs in a parallel application, arranged as a computer cluster in a master-slave model. In this paper, we report the development of a Linux cluster for implementing parallel ensemble Monte Carlo modelling of solid-state devices. We propose the use of the Parallel Virtual Machine (PVM) standard for running the parallel algorithm of the ensemble MC simulation. Some results of the development are also presented in this paper.
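To illustrate the master-slave decomposition (the original work uses PVM; Python's multiprocessing is used here purely as a stand-in, and the scattering "physics" is a toy placeholder), the particle ensemble is partitioned into chunks, each worker simulates its chunk independently, and the master averages the results:

```python
import random
from multiprocessing import Pool

N_PARTICLES = 40_000   # ensemble size, as in the abstract
N_WORKERS = 4          # stand-in for PVM slave processes
N_STEPS = 200

def simulate_chunk(args):
    """Toy ensemble Monte Carlo kernel: each particle alternates free
    flight under a field with randomising scattering events. Only the
    master-slave decomposition is the point; the physics is placeholder."""
    n_particles, seed = args
    rng = random.Random(seed)
    total_v = 0.0
    for _ in range(n_particles):
        v = 0.0
        for _ in range(N_STEPS):
            v += 0.01                  # acceleration by the field (toy units)
            if rng.random() < 0.1:     # scattering event randomises velocity
                v = rng.gauss(0.0, 0.05)
        total_v += v
    return total_v

if __name__ == "__main__":
    chunk = N_PARTICLES // N_WORKERS
    tasks = [(chunk, seed) for seed in range(N_WORKERS)]
    with Pool(N_WORKERS) as pool:      # master farms chunks out to slaves
        partial_sums = pool.map(simulate_chunk, tasks)
    mean_v = sum(partial_sums) / N_PARTICLES
    print(f"ensemble mean drift velocity (toy units): {mean_v:.4f}")
```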