Displaying publications 1 - 20 of 1493 in total

  1. Chew LJ, Haw SC, Subramaniam S
    F1000Res, 2021;10:937.
    PMID: 34868563 DOI: 10.12688/f1000research.73060.1
    Background: A recommender system captures user preferences and behaviour to provide relevant recommendations to the user. A hybrid model-based recommender system requires a pre-trained data model to generate recommendations for a user. Ontology helps to represent semantic information and relationships, modelling the expressivity and linkage among the data. Methods: We enhanced the accuracy of the matrix factorization model by utilizing ontology to enrich the information of the user-item matrix, integrating the item-based and user-based collaborative filtering techniques. In particular, the combination of enriched data, which consists of semantic similarity together with rating patterns, helps to reduce the cold start problem in the model-based recommender system. When a new user or item first enters the system, we have the user demographics or item profile linked to our ontology. Thus, semantic similarity can be calculated during the item-based and user-based collaborative filtering process. The item-based and user-based filtering processes are used to predict the unknown ratings of the original matrix. Results: Experimental evaluations were carried out on the MovieLens 100k dataset to demonstrate the accuracy of our proposed approach as compared to the baseline methods using (i) Singular Value Decomposition (SVD) and (ii) a combination of the item-based collaborative filtering technique with SVD. Experimental results demonstrated that our proposed method reduced the data sparsity from 0.9542% to 0.8435%. In addition, our proposed method achieved better accuracy, with a Root Mean Square Error (RMSE) of 0.9298, as compared to the baseline method (RMSE: 0.9642) and the existing method (RMSE: 0.9492). Conclusions: Our proposed method enhanced the dataset information by integrating user-based and item-based collaborative filtering techniques. The experimental results show that our system reduced the data sparsity and achieved better accuracy than both the baseline and existing methods.
    Matched MeSH terms: Algorithms*
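    The SVD baseline that this approach is compared against can be illustrated with a short sketch: reconstruct a small user-item rating matrix from a rank-k truncated SVD and score it with RMSE on the observed entries (a toy stand-in for the MovieLens 100k pipeline, not the authors' code).
      import numpy as np

      # Toy user-item rating matrix (0 = unknown rating); a stand-in for MovieLens 100k.
      R = np.array([
          [5, 3, 0, 1],
          [4, 0, 0, 1],
          [1, 1, 0, 5],
          [0, 0, 5, 4],
      ], dtype=float)

      # Fill unknown entries with the per-user mean before factorizing (one common baseline choice).
      user_means = R.sum(axis=1) / (R != 0).sum(axis=1)
      R_filled = np.where(R == 0, user_means[:, None], R)

      # Rank-k truncated SVD reconstruction gives the predicted ratings.
      k = 2
      U, s, Vt = np.linalg.svd(R_filled, full_matrices=False)
      R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

      # RMSE over the observed entries, as reported in the abstract.
      mask = R != 0
      rmse = np.sqrt(np.mean((R[mask] - R_hat[mask]) ** 2))
      print(np.round(R_hat, 2))
      print("RMSE on observed entries:", round(rmse, 4))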
  2. Anjum ZM, Said DM, Hassan MY, Leghari ZH, Sahar G
    PLoS One, 2022;17(4):e0264958.
    PMID: 35417475 DOI: 10.1371/journal.pone.0264958
    The installation of Distributed Generation (DG) units in Radial Distribution Networks (RDNs) has significant potential to minimize active power losses in distribution networks. However, inaccurate size(s) and location(s) of DG units increase power losses and the associated Annual Financial Losses (AFL). A comprehensive review of the literature reveals that existing analytical, metaheuristic and hybrid algorithms employed on DG allocation problems become trapped in local or global optima, resulting in higher power losses. To address these limitations, this article develops a parallel hybrid Arithmetic Optimization Algorithm and Salp Swarm Algorithm (AOASSA) for the optimal sizing and placement of DGs in RDNs. The proposed parallel hybrid AOASSA enables the mutual benefit of both algorithms, i.e., the exploration capability of the SSA and the exploitation capability of the AOA. The performance of the proposed algorithm has been analyzed against the hybrid Arithmetic Optimization Algorithm Particle Swarm Optimization (AOAPSO), Salp Swarm Algorithm Particle Swarm Optimization (SSAPSO), standard AOA, SSA, and Particle Swarm Optimization (PSO) algorithms. The results obtained reveal that the proposed algorithm produces quality solutions and minimum power losses in RDNs. The Power Loss Reduction (PLR) obtained with the proposed algorithm has also been validated against recent analytical, metaheuristic and hybrid optimization algorithms with the help of three cases based on the number of DG units allocated. Using the proposed algorithm, the PLR and associated AFL reduction of the 33-bus and 69-bus RDNs improved to 65.51% and 69.14%, respectively. This study will help local distribution companies to minimize power losses and the associated AFL in the long-term planning paradigm.
    Matched MeSH terms: Algorithms*
  3. Devan PAM, Hussin FA, Ibrahim RB, Bingi K, Nagarajapandian M, Assaad M
    Sensors (Basel), 2022 Jan 13;22(2).
    PMID: 35062578 DOI: 10.3390/s22020617
    This paper proposes a novel hybrid arithmetic-trigonometric optimization algorithm (ATOA) using different trigonometric functions for complex and continuously evolving real-time problems. The proposed algorithm adopts different trigonometric functions, namely sin, cos, and tan, with the conventional sine cosine algorithm (SCA) and arithmetic optimization algorithm (AOA) to improve the convergence rate and optimal search area in the exploration and exploitation phases. The proposed algorithm is simulated with 33 distinct optimization test problems consisting of multiple dimensions to showcase the effectiveness of ATOA. Furthermore, the different variants of the ATOA optimization technique are used to obtain the controller parameters for the real-time pressure process plant to investigate its performance. The obtained results have shown a remarkable performance improvement compared with the existing algorithms.
    Matched MeSH terms: Algorithms*
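    The sine cosine family of position updates underlying SCA, AOA, and the proposed ATOA can be sketched with the standard SCA update rule below (a generic illustration; the paper's tan-based operators and parameter schedules are not reproduced here).
      import numpy as np

      rng = np.random.default_rng(0)

      def sphere(x):
          # Simple unimodal test function standing in for the 33 benchmark problems.
          return float(np.sum(x ** 2))

      def sca(obj, dim=5, pop=20, iters=200, lb=-10.0, ub=10.0, a=2.0):
          """Standard sine cosine algorithm position update (sketch)."""
          X = rng.uniform(lb, ub, size=(pop, dim))
          best = X[np.argmin([obj(x) for x in X])].copy()
          for t in range(iters):
              r1 = a - t * (a / iters)  # shrinks over time: exploration -> exploitation
              for i in range(pop):
                  r2 = rng.uniform(0, 2 * np.pi, dim)
                  r3 = rng.uniform(0, 2, dim)
                  r4 = rng.uniform(0, 1, dim)
                  step = np.where(r4 < 0.5,
                                  r1 * np.sin(r2) * np.abs(r3 * best - X[i]),
                                  r1 * np.cos(r2) * np.abs(r3 * best - X[i]))
                  X[i] = np.clip(X[i] + step, lb, ub)
                  if obj(X[i]) < obj(best):
                      best = X[i].copy()
          return best, obj(best)

      print(sca(sphere))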
  4. Rehman MZ, Khan A, Ghazali R, Aamir M, Nawi NM
    PLoS One, 2021;16(8):e0255269.
    PMID: 34358237 DOI: 10.1371/journal.pone.0255269
    The Sine-Cosine algorithm (SCA) is a population-based metaheuristic algorithm that utilizes sine and cosine functions to perform the search. To enable the search process, SCA incorporates several search parameters. However, these parameters can make the search in SCA vulnerable to local minima/maxima. To overcome this problem, a new Multi Sine-Cosine algorithm (MSCA) is proposed in this paper. MSCA utilizes multiple swarm clusters to diversify and intensify the search in order to avoid the local minima/maxima problem. Secondly, during the update step, MSCA also checks for better search clusters that offer convergence to the global minimum effectively. To assess its performance, we tested MSCA on unimodal, multimodal and composite benchmark functions taken from the literature. Experimental results reveal that MSCA is statistically superior with regard to convergence as compared to recent state-of-the-art metaheuristic algorithms, including the original SCA.
    Matched MeSH terms: Algorithms*
  5. Faranak Rabiei, Fudziah Ismail, Mohamed Suleiman
    Sains Malaysiana, 2013;42:1679-1687.
    In this article we propose three explicit Improved Runge-Kutta (IRK) methods for solving first-order ordinary differential equations. These methods are two-step in nature and require a lower number of stages compared to the classical Runge-Kutta method; therefore the new scheme is computationally more efficient at achieving the same order of local accuracy. The order conditions of the new methods are obtained up to order five using Taylor series expansion, and the third- and fourth-order methods with different numbers of stages are derived based on the order conditions. The free parameters are obtained through minimization of the error norm. Convergence of the method is proven and the stability regions are presented. To illustrate the efficiency of the method, a number of problems are solved; the numerical results show that the method is more efficient than the existing Runge-Kutta method.
    Matched MeSH terms: Algorithms
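    For reference, the classical fourth-order Runge-Kutta step that the IRK methods are compared against looks like this (a textbook sketch of the comparison baseline, not the two-step IRK scheme itself).
      import math

      def rk4_step(f, t, y, h):
          # One classical Runge-Kutta step of order four for y' = f(t, y).
          k1 = f(t, y)
          k2 = f(t + h / 2, y + h * k1 / 2)
          k3 = f(t + h / 2, y + h * k2 / 2)
          k4 = f(t + h, y + h * k3)
          return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

      # Example: y' = -2y, y(0) = 1, exact solution exp(-2t).
      y, t, h = 1.0, 0.0, 0.1
      for _ in range(10):
          y = rk4_step(lambda t, y: -2 * y, t, y, h)
          t += h
      print(y, math.exp(-2 * t))  # numerical vs exact value at t = 1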
  6. Bashir U, Jamaludin Md. Ali
    Sains Malaysiana, 2016;45:1557-1563.
    This study was concerned with shape-preserving interpolation of 2D data. A piecewise C1 univariate rational quadratic trigonometric spline with three positive parameters was devised to produce a shape-preserving interpolant for given shaped data. Positive and monotone curve interpolation schemes were presented to sustain the respective shape features of the data. Each scheme was tested on numerous shaped data sets to substantiate the assertions made in its construction. Moreover, these schemes were compared with conventional shape-preserving rational quadratic splines to demonstrate the usefulness of the construction.
    Matched MeSH terms: Algorithms
  7. Premkumar R, Srinivasan A, Harini Devi KG, M D, E G, Jadhav P, et al.
    Biosystems, 2024 Mar;237:105142.
    PMID: 38340976 DOI: 10.1016/j.biosystems.2024.105142
    Single-cell analysis (SCA) improves the detection of cancer, the immune system, and chronic diseases from complicated biological processes. SCA techniques generate high-dimensional, innovative, and complex data, making traditional analysis difficult and impractical. Across different cell types, conventional cell sequencing methods have limitations in signal transformation and disease detection. To overcome these challenges, various deep learning (DL) techniques have outperformed standard state-of-the-art computer algorithms in SCA. This review discusses DL applications in SCA and presents a detailed study on improving SCA data processing and analysis. Firstly, we introduced fundamental concepts and critical points of cell analysis techniques, which illustrate the applications of SCA. Secondly, we discussed various effective DL strategies applied to SCA to analyze data and provide significant results from complex data sources. Finally, we explored DL as a future direction in SCA and highlighted new challenges and opportunities for the rapidly evolving field of single-cell omics.
    Matched MeSH terms: Algorithms
  8. Erten M, Tuncer I, Barua PD, Yildirim K, Dogan S, Tuncer T, et al.
    J Digit Imaging, 2023 Aug;36(4):1675-1686.
    PMID: 37131063 DOI: 10.1007/s10278-023-00827-8
    Microscopic examination of urinary sediments is a common laboratory procedure. Automated image-based classification of urinary sediments can reduce analysis time and costs. Inspired by cryptographic mixing protocols and computer vision, we developed an image classification model that combines a novel Arnold Cat Map (ACM)- and fixed-size patch-based mixer algorithm with transfer learning for deep feature extraction. Our study dataset comprised 6,687 urinary sediment images belonging to seven classes: Cast, Crystal, Epithelia, Epithelial nuclei, Erythrocyte, Leukocyte, and Mycete. The developed model consists of four layers: (1) an ACM-based mixer to generate mixed images from resized 224 × 224 input images using fixed-size 16 × 16 patches; (2) DenseNet201 pre-trained on ImageNet1K to extract 1,920 features from each raw input image, and its six corresponding mixed images were concatenated to form a final feature vector of length 13,440; (3) iterative neighborhood component analysis to select the most discriminative feature vector of optimal length 342, determined using a k-nearest neighbor (kNN)-based loss function calculator; and (4) shallow kNN-based classification with ten-fold cross-validation. Our model achieved 98.52% overall accuracy for seven-class classification, outperforming published models for urinary cell and sediment analysis. We demonstrated the feasibility and accuracy of deep feature engineering using an ACM-based mixer algorithm for image preprocessing combined with pre-trained DenseNet201 for feature extraction. The classification model was both demonstrably accurate and computationally lightweight, making it ready for implementation in real-world image-based urine sediment analysis applications.
    Matched MeSH terms: Algorithms*
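    The Arnold Cat Map (ACM) that drives the mixer permutes the pixel coordinates of a square image; a minimal sketch of repeated ACM iterations on an N x N array is given below (illustrative only; the 16 x 16 patch-based mixing and the DenseNet201 feature pipeline are not reproduced).
      import numpy as np

      def arnold_cat_map(img, iterations=1):
          """Apply the Arnold Cat Map (x, y) -> ((x + y) mod N, (x + 2y) mod N) to a square image."""
          n = img.shape[0]
          assert img.shape[0] == img.shape[1], "ACM is defined on square images"
          out = img.copy()
          for _ in range(iterations):
              scrambled = np.empty_like(out)
              for x in range(n):
                  for y in range(n):
                      # The map is a bijection, so every target cell is written exactly once.
                      scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
              out = scrambled
          return out

      # Example on a small 8 x 8 test pattern; the map is periodic, so enough
      # iterations eventually return the original image.
      img = np.arange(64).reshape(8, 8)
      print(arnold_cat_map(img, iterations=3))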
  9. Adeshina AM, Hashim R
    Interdiscip Sci, 2017 Mar;9(1):140-152.
    PMID: 26754740 DOI: 10.1007/s12539-015-0140-9
    Diagnostic radiology is a core and integral part of modern medicine, paving the way for primary care physicians in disease diagnosis, treatment and therapy management. All recent standard healthcare procedures have benefitted immensely from contemporary information technology, which has revolutionized the approaches to acquiring, storing and sharing diagnostic data for efficient and timely diagnosis of diseases. The connected health network was introduced as an alternative to the ageing traditional concept of the healthcare system, improving hospital-physician connectivity and clinical collaboration. Undoubtedly, this modern medicinal approach has drastically improved healthcare, but at the expense of high computational cost and possible breaches of diagnostic privacy. Consequently, a number of cryptographic techniques have recently been applied to clinical applications, but the challenge of not being able to successfully encrypt both image and textual data persists. Furthermore, keeping the encryption-decryption processing time of medical datasets within a considerably lower computational cost, without jeopardizing the required security strength of the encryption algorithm, remains an outstanding issue. This study proposes a secured radiology-diagnostic data framework for connected health networks using a high-performance GPU-accelerated Advanced Encryption Standard. The study was evaluated with radiology image datasets consisting of brain MR and CT datasets obtained from the Department of Surgery, University of North Carolina, USA, and the Swedish National Infrastructure for Computing. Sample patients' notes from the University of North Carolina School of Medicine at Chapel Hill were also used to evaluate the framework for its strength in encrypting-decrypting textual data in the form of medical reports. Significantly, the framework is not only able to accurately encrypt and decrypt medical image datasets, but it also successfully encrypts and decrypts textual data in Microsoft Word, Microsoft Excel and Portable Document Formats, which are the conventional formats for documenting medical records. Interestingly, the entire encryption and decryption procedure was achieved at a lower computational cost using regular hardware and software resources, without compromising either the quality of the decrypted data or the security level of the algorithms.
    Matched MeSH terms: Algorithms*
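    The AES layer of such a framework can be sketched with a standard library call. The snippet below uses AES-GCM from the Python cryptography package on an arbitrary byte stream (image or Word/Excel/PDF file alike); the file name and the use of AES-GCM are assumptions of this sketch, not the authors' GPU-accelerated implementation.
      import os
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      def encrypt_file(path, key):
          # Read any medical record (MR/CT image, DOCX report, XLSX sheet, PDF, ...) as raw bytes.
          with open(path, "rb") as fh:
              plaintext = fh.read()
          nonce = os.urandom(12)  # unique nonce per message
          ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
          return nonce, ciphertext

      def decrypt_bytes(nonce, ciphertext, key):
          return AESGCM(key).decrypt(nonce, ciphertext, None)

      key = AESGCM.generate_key(bit_length=256)       # AES-256 key
      nonce, blob = encrypt_file("report.docx", key)  # hypothetical file name
      assert decrypt_bytes(nonce, blob, key) == open("report.docx", "rb").read()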
  10. Jalil-Masir H, Fattahi R, Ghanbari-Adivi E, Asadi Aghbolaghi M, Ehteram M, Ahmed AN, et al.
    Environ Sci Pollut Res Int, 2022 Sep;29(44):67180-67213.
    PMID: 35522411 DOI: 10.1007/s11356-022-20472-y
    Predicting the sediment transport rate (STR) in the presence of flexible vegetation is a critical task for modelers. Modeling sediment transport in the coastal region is equally challenging due to the nonlinearity of the STR-vegetation interaction. In the present study, the kernel extreme learning model (KELM) was integrated with the seagull optimization algorithm (SEOA), the crow optimization algorithm (COA), the firefly algorithm (FFA), and particle swarm optimization (PSO) to estimate the STR in the presence of vegetation cover. The rigidity index, D50/wave height, Newton number, drag coefficient, and cover density were used as inputs to the models. The root mean square error (RMSE), the mean absolute error (MAE), and the percentage of bias (PBIAS) were used to evaluate the capability of the models. This study applied a novel ensemble model, the inclusive multiple model (IMM), to assemble the outputs of the KELM models. The innovations of this study were the introduction of the new IMM model and the use of new hybrid KELM models for predicting the STR and investigating the effects of various parameters on the STR. At the testing level, the MAE of the IMM model was 22, 60, 68, 73, and 76% lower than those of the KELM-SEOA, KELM-COA, KELM-PSO, and KELM models, respectively. The IMM had a PBIAS of 5%, whereas the KELM-SEOA, KELM-COA, KELM-PSOA, and KELM had PBIAS of 9, 12, 14, 18, and 21%, respectively. The results indicated that increasing the drag coefficient and D50/wave height decreased the STR. The findings revealed that the IMM and KELM-SEOA had higher predictive ability for the STR. Since sediment is one of the most important sources of environmental pollution, this study is useful for monitoring and controlling environmental pollution.
    Matched MeSH terms: Algorithms*
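    The three evaluation metrics used to compare the KELM variants are standard and can be computed directly; the definitions below follow the usual hydrological conventions, which is an assumption about the paper's exact formulas, and the data are synthetic.
      import numpy as np

      def rmse(obs, sim):
          return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

      def mae(obs, sim):
          return float(np.mean(np.abs(np.asarray(obs) - np.asarray(sim))))

      def pbias(obs, sim):
          # Percentage of bias under the common convention 100 * sum(obs - sim) / sum(obs);
          # positive values indicate underestimation by the model.
          obs, sim = np.asarray(obs), np.asarray(sim)
          return float(100.0 * np.sum(obs - sim) / np.sum(obs))

      obs = np.array([1.2, 0.8, 1.5, 2.1, 1.7])  # synthetic sediment transport rates
      sim = np.array([1.1, 0.9, 1.4, 2.3, 1.6])
      print(rmse(obs, sim), mae(obs, sim), pbias(obs, sim))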
  11. Anam C, Naufal A, Sutanto H, Arifin Z, Hidayanto E, Tan LK, et al.
    Biomed Phys Eng Express, 2023 May 30;9(4).
    PMID: 37216929 DOI: 10.1088/2057-1976/acd785
    Objective. To develop an algorithm to measure slice thickness, running on three types of Catphan phantoms, with the ability to adapt to any misalignment and rotation of the phantoms. Method. Images of Catphan 500, 504, and 604 phantoms were examined. In addition, images with various slice thicknesses ranging from 1.5 to 10.0 mm, distances to the iso-center and phantom rotations were also examined. The automatic slice thickness algorithm was carried out by processing only objects within a circle having a diameter of half the diameter of the phantom. Segmentation was performed within an inner circle with dynamic thresholds to produce binary images with wire and bead objects within it. Region properties were used to distinguish wire ramps and bead objects. At each identified wire ramp, the angle was detected using the Hough transform. Profile lines were then placed on each ramp based on the centroid coordinates and detected angles, and the full-width at half maximum (FWHM) was determined for the average profile. The slice thickness was obtained by multiplying the FWHM by the tangent of the ramp angle (23°). Results. Automatic measurements work well and have only a small difference (<0.5 mm) from manual measurements. For slice thickness variation, automatic measurement successfully performs segmentation and correctly locates the profile line on all wire ramps. The results show measured slice thicknesses that are close (<3 mm) to the nominal thickness for thin slices, but slightly deviate for thicker slices. There is a strong correlation (R^2 = 0.873) between automatic and manual measurements. Testing the algorithm at various distances from the iso-center and phantom rotation angles also produced accurate results. Conclusion. An automated algorithm for measuring slice thickness on three types of Catphan CT phantom images has been developed. The algorithm works well at various thicknesses, distances from the iso-center, and phantom rotations.
    Matched MeSH terms: Algorithms*
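    The core of the measurement, the FWHM of the averaged wire-ramp profile multiplied by tan(23°), can be sketched as follows (a simplified illustration with linear interpolation at the half-maximum crossings; the segmentation, Hough-transform angle detection, and phantom-specific details are omitted, and the profile here is synthetic).
      import numpy as np

      def fwhm(profile, pixel_size_mm=1.0):
          """Full width at half maximum of a 1-D profile, in millimetres."""
          profile = np.asarray(profile, dtype=float)
          base = profile.min()
          half = base + (profile.max() - base) / 2.0
          above = np.where(profile >= half)[0]
          left, right = above[0], above[-1]

          def crossing(i0, i1):
              # Linear interpolation between samples i0 and i1 at the half-maximum level.
              return i0 + (half - profile[i0]) / (profile[i1] - profile[i0])

          x_left = crossing(left - 1, left) if left > 0 else float(left)
          x_right = crossing(right, right + 1) if right < len(profile) - 1 else float(right)
          return (x_right - x_left) * pixel_size_mm

      def slice_thickness(profile, pixel_size_mm, ramp_angle_deg=23.0):
          return fwhm(profile, pixel_size_mm) * np.tan(np.radians(ramp_angle_deg))

      # Synthetic Gaussian-like profile standing in for the averaged wire-ramp line profile.
      x = np.arange(60)
      profile = 100 * np.exp(-((x - 30) ** 2) / (2 * 5.0 ** 2))
      print(round(slice_thickness(profile, pixel_size_mm=0.5), 2), "mm")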
  12. Ling L, Huang L, Wang J, Zhang L, Wu Y, Jiang Y, et al.
    Interdiscip Sci, 2023 Dec;15(4):560-577.
    PMID: 37160860 DOI: 10.1007/s12539-023-00570-2
    Soft subspace clustering (SSC), which analyzes high-dimensional data and assigns distinct weights to each cluster class to assess the membership degree of each cluster to the space, has shown promising results in recent years. By introducing spatial information, enhanced SSC algorithms improve the degree to which intraclass compactness and interclass separation are achieved. However, these algorithms are sensitive to noisy data and have a tendency to fall into local optima, and their segmentation accuracy is poor because of the influence of noisy data. In this study, an SSC approach based on particle swarm optimization (PSO) is proposed with the intention of reducing the interference caused by noisy data. First, the particle swarm optimization method is used to locate the best possible clustering centers. Second, increasing the amount of spatial membership makes it possible to use the spatial information to quantify the link between different clusters more precisely. Finally, the extended noise clustering method is implemented to maximize the weights, and the constraint condition on the weights is changed from an equality constraint to a boundary constraint in order to reduce the impact of noise. The methodology presented in this research reduces the sensitivity of the SSC algorithm to noisy data. The efficacy of the algorithm can be demonstrated using images with noise already present or by adding noise to existing images. A number of trials demonstrate that the revised PSO-based SSC approach has superior segmentation accuracy; as a result, this work provides a novel method for the segmentation of noisy images.
    Matched MeSH terms: Algorithms*
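    The particle swarm optimization step used to seek the cluster centers follows the standard velocity/position update; the sketch below is a generic global-best PSO on a toy clustering objective, not the paper's soft subspace membership weighting or noise-clustering extension.
      import numpy as np

      rng = np.random.default_rng(1)

      def pso(obj, dim=2, pop=15, iters=100, w=0.7, c1=1.5, c2=1.5, lb=-5.0, ub=5.0):
          """Global-best PSO: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)."""
          X = rng.uniform(lb, ub, (pop, dim))
          V = np.zeros((pop, dim))
          pbest = X.copy()
          pbest_val = np.array([obj(x) for x in X])
          gbest = pbest[np.argmin(pbest_val)].copy()
          for _ in range(iters):
              r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
              V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
              X = np.clip(X + V, lb, ub)
              vals = np.array([obj(x) for x in X])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = X[improved], vals[improved]
              gbest = pbest[np.argmin(pbest_val)].copy()
          return gbest, float(pbest_val.min())

      # Toy objective: total squared distance of a candidate cluster center to noisy points.
      points = rng.normal(loc=[2.0, -1.0], scale=0.3, size=(50, 2))
      print(pso(lambda c: float(np.sum((points - c) ** 2))))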
  13. Memon A, Wazir Bin Mustafa M, Anjum W, Ahmed A, Ullah S, Altbawi SMA, et al.
    PLoS One, 2022;17(5):e0265611.
    PMID: 35551274 DOI: 10.1371/journal.pone.0265611
    A brushless double-fed induction generator (BDFIG) has shown tremendous success in wind turbines due to its robust brushless design, smooth operation, and variable speed characteristics. However, the control of the machine during low voltage ride through (LVRT) needs greater attention, as such events may cause total disconnection of the machine. In addition, BDFIG-based wind turbines must be capable of providing a controlled amount of reactive power to the grid as per modern grid code requirements, and a suitable dynamic response of the machine during both normal and fault conditions must be ensured. This paper, as such, attempts to provide reactive power to the grid by analytically calculating the decaying flux and developing a rotor-side converter control scheme accordingly. Furthermore, the dynamic response and LVRT capability of the BDFIG are enhanced by using one of the very intelligent optimization algorithms, the Salp Swarm Algorithm (SSA). To prove the efficacy of the proposed control scheme, its performance is compared with that of a particle swarm optimization (PSO) based controller in terms of limiting the fault current, regulating active and reactive power, and maintaining stable operation of the power system under identical operating conditions. The simulation results show that the proposed control scheme significantly improves the dynamic response and LVRT capability of the developed BDFIG-based wind energy conversion system, thus proving its efficacy.
    Matched MeSH terms: Algorithms*
  14. Altalib MK, Salim N
    Molecules, 2021 Nov 03;26(21).
    PMID: 34771076 DOI: 10.3390/molecules26216669
    Traditional drug development is a slow and costly process that leads to the production of new drugs. Virtual screening (VS) is a computational procedure that measures the similarity of molecules as one of its primary tasks. Many techniques for capturing the biological similarity between a test compound and a known target ligand have been established in ligand-based virtual screens (LBVSs). However, despite the good performance of the above methods compared to their predecessors, especially when dealing with molecules that have structurally homogeneous active elements, they are not satisfactory when dealing with molecules that are structurally heterogeneous. The main aim of this study is to improve the performance of similarity searching, especially for molecules that are structurally heterogeneous. A Siamese network is used due to its capability to deal with complicated data samples in many fields. The Siamese multi-layer perceptron architecture is enhanced by using two similarity distance layers with one fused layer, adding multiple layers after the fusion layer, and then pruning the nodes of the model that contribute little or nothing during inference according to their signal-to-noise ratio values. Several benchmark datasets are used: the MDL Drug Data Report (MDDR-DS1, MDDR-DS2, and MDDR-DS3), the Maximum Unbiased Validation (MUV), and the Directory of Useful Decoys (DUD). The results show that the proposed method outperforms the standard Tanimoto coefficient (TAN) and other methods. Additionally, it is possible to reduce the number of nodes in the Siamese multilayer perceptron model while still keeping the effectiveness of recall at the same level.
    Matched MeSH terms: Algorithms*
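    The Tanimoto coefficient (TAN) baseline that the Siamese model is compared against measures bit-fingerprint overlap; a minimal sketch on toy binary fingerprints is shown below (the MDDR/MUV/DUD fingerprinting details are not reproduced).
      import numpy as np

      def tanimoto(fp_a, fp_b):
          """Tanimoto similarity of two binary fingerprints: |A AND B| / |A OR B|."""
          fp_a, fp_b = np.asarray(fp_a, bool), np.asarray(fp_b, bool)
          intersection = np.logical_and(fp_a, fp_b).sum()
          union = np.logical_or(fp_a, fp_b).sum()
          return float(intersection / union) if union else 0.0

      # Toy 16-bit fingerprints standing in for real molecular fingerprints.
      query  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1])
      target = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0])
      print(round(tanimoto(query, target), 3))  # rank database molecules by this score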
  15. Salahshour S, Ahmadian A, Salimi M, Ferrara M, Baleanu D
    Chaos, 2019 Aug;29(8):083110.
    PMID: 31472490 DOI: 10.1063/1.5096022
    Understanding the behavior of the solution in asymptotic situations is essential for recurring applications in control theory and the modeling of real-world systems. This study discusses a robust and definitive approach to finding the interval approximate asymptotic solutions of fractional differential equations (FDEs) with the Atangana-Baleanu (A-B) derivative. Such critical tasks require precisely observing the behavior of the noninterval case first. In this regard, we initially shed light on the noninterval cases and analyze the behavior of the approximate asymptotic solutions; then, we introduce the A-B derivative for FDEs under interval arithmetic and develop a new and reliable approximation approach for fractional interval differential equations with the interval A-B derivative to obtain the interval approximate asymptotic solutions. We exploit Laplace transforms to get the asymptotic approximate solution based on the interval asymptotic A-B fractional derivatives under interval arithmetic. The techniques developed here provide essential tools for finding interval approximate asymptotic solutions under interval fractional derivatives with nonsingular Mittag-Leffler kernels. Two cases arising in real-world systems are modeled under the interval notion and given to interpret the behavior of the interval approximate asymptotic solutions under different conditions, as well as to validate this new approach. This study highlights the importance of asymptotic solutions for FDEs regardless of interval or noninterval parameters.
    Matched MeSH terms: Algorithms
  16. Ibrahim Gambo, Nor Haniza Sarmin, Sanaa Mohamed Saleh Omer
    MATEMATIKA, 2019;35(2):237-247.
    In this work, a non-abelian metabelian group is denoted by G, and its conjugacy class graph is studied. The conjugacy class graph of a group is the graph associated with the conjugacy classes of the group: its vertices are the non-central conjugacy classes of the group, and two distinct vertices are joined by an edge if their cardinalities are not coprime. A group is referred to as metabelian if there exists an abelian normal subgroup such that the corresponding factor group is also abelian. It has been proven earlier that there exist 25 non-abelian metabelian groups of order less than 24, which are considered in this work. In this article, the conjugacy class graphs of the non-abelian metabelian groups of order less than 24 are determined, and examples of some finite groups associated with other graphs are given.
    Matched MeSH terms: Algorithms
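    The edge rule of the conjugacy class graph, joining two non-central classes whenever their cardinalities share a common factor, can be sketched directly from a list of class sizes (an illustrative helper; the class sizes of a particular metabelian group would have to be supplied).
      from math import gcd
      from itertools import combinations

      def conjugacy_class_graph(class_sizes):
          """Vertices are the non-central conjugacy classes (size > 1); two vertices
          are adjacent when their cardinalities are not coprime."""
          vertices = [s for s in class_sizes if s > 1]  # drop central classes of size 1
          edges = [(i, j) for i, j in combinations(range(len(vertices)), 2)
                   if gcd(vertices[i], vertices[j]) > 1]
          return vertices, edges

      # Example: conjugacy class sizes of the metabelian group S3 are 1, 3, 2
      # (identity, the three transpositions, the two 3-cycles).
      print(conjugacy_class_graph([1, 3, 2]))  # two vertices of sizes 3 and 2, no edge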
  17. Ravshan Ashurov
    The partial integrals of the N-fold Fourier integrals connected with elliptic polynomials (not necessarily homogeneous; the principal part of which has a strictly convex level surface) are considered. It is proved that if a + s > (N – 1)/2 and ap = N then the Riesz means of nonnegative order s of the N-fold Fourier integrals of continuous finite functions from the Sobolev spaces W_p^a(R^N) converge uniformly on every compact set, and if a + s > (N – 1)/2 and ap = N, then for any x_0 ∈ R^N there exists a continuous finite function from the Sobolev space such that the corresponding Riesz means of the N-fold Fourier integrals diverge to infinity at x_0. AMS 2000 Mathematics Subject Classifications: Primary 42B08; Secondary 42C14.
    Matched MeSH terms: Algorithms
  18. Sharifah, S.Y., Norsheila, F., Muladi
    ASM Science Journal, 2007;1(1):19-25.
    Orthogonal Frequency Division Multiplexing (OFDM) is a successful technique in wireless communication systems. Frequency offset in an OFDM system leads to a loss of orthogonality among subcarriers, which results in intercarrier interference (ICI). To improve the bandwidth efficiency of the ICI self-cancellation scheme, frequency-domain partial response signaling (PRS) was investigated. In this study, integer polynomial partial response coefficients were exploited to enhance the carrier-to-interference power ratio (CIR) in the OFDM system. The CIR was enhanced by up to 4.1 dB and 5 dB when the length of the PRS polynomial, K, was 2 and 5, respectively.
    Matched MeSH terms: Algorithms
  19. Seer QH, Nandong J
    ISA Trans, 2017 Mar;67:233-245.
    PMID: 28160974 DOI: 10.1016/j.isatra.2017.01.017
    Open-loop unstable systems with time delays are often encountered in the process industry and are often more difficult to control than stable processes. In this paper, the stabilization by PID controller of second-order unstable processes, which can be represented as second-order deadtime with an unstable pole (SODUP) and second-order deadtime with two unstable poles (SODTUP), is performed via the necessary and sufficient criteria of Routh-Hurwitz stability analysis. The stability analysis provides improved understanding of the existence of a stabilizing range for each PID parameter. Three simple PID tuning algorithms are proposed to provide the desired closed-loop performance and robustness within the stable regions of controller parameters obtained via the stability analysis. The proposed PID controllers show improved performance over those derived via some existing methods.
    Matched MeSH terms: Algorithms
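    The Routh-Hurwitz test that underlies the stabilizing-range analysis can be sketched as a first-column sign check on the Routh array (a generic numeric implementation; the paper's symbolic analysis of PID parameter ranges for SODUP/SODTUP models is not reproduced).
      import numpy as np

      def routh_hurwitz_stable(coeffs, eps=1e-12):
          """True when all roots of the characteristic polynomial lie in the open left
          half-plane, judged by the signs of the first column of the Routh array.
          `coeffs` are [a_n, ..., a_1, a_0] in descending powers of s."""
          c = np.asarray(coeffs, dtype=float)
          n = len(c)
          rows = [c[0::2].copy(), c[1::2].copy()]
          width = len(rows[0])
          rows[1] = np.pad(rows[1], (0, width - len(rows[1])))
          for _ in range(n - 2):
              prev, cur = rows[-2], rows[-1]
              pivot = cur[0] if abs(cur[0]) > eps else eps  # simple handling of zero pivots
              new = np.zeros(width)
              for j in range(width - 1):
                  new[j] = (pivot * prev[j + 1] - prev[0] * cur[j + 1]) / pivot
              rows.append(new)
          return all(r[0] > 0 for r in rows)

      # Example: s^3 + 6s^2 + 11s + 6 has roots -1, -2, -3 (stable);
      # s^3 + s^2 - 4s - 4 has a root at +2 (unstable).
      print(routh_hurwitz_stable([1, 6, 11, 6]))   # True
      print(routh_hurwitz_stable([1, 1, -4, -4]))  # False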
  20. Rahim LA, Kudiri KM, Bahattacharjee S
    PLoS One, 2019;14(5):e0214044.
    PMID: 31120878 DOI: 10.1371/journal.pone.0214044
    The parallelisation of big data is emerging as an important framework for large-scale parallel data applications such as seismic data processing. The field of seismic data is so large or complex that traditional data processing software is incapable of dealing with it. For example, the implementation of parallel processing in seismic applications to improve the processing speed is complex in nature. To overcome this issue, a simple technique that helps provide parallel processing for big data applications such as seismic algorithms is needed. In our framework, we used Apache Hadoop with its MapReduce function. All experiments were conducted on the RedHat CentOS platform. Finally, we studied the bottlenecks and improved the overall performance of the system for seismic algorithms (stochastic inversion).
    Matched MeSH terms: Algorithms
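    A Hadoop Streaming style mapper/reducer pair shows the shape of such a parallel job; the sketch below averages per-trace amplitudes as a stand-in computation, and the file names, key layout, and launch command are assumptions of this illustration rather than the authors' code.
      #!/usr/bin/env python
      # Hypothetical launch (jar path and file names depend on the cluster):
      #   hadoop jar hadoop-streaming.jar -input traces.txt -output out \
      #       -mapper "python job.py map" -reducer "python job.py reduce"
      import sys

      def mapper():
          # Each input line is assumed to be "trace_id amplitude"; emit key\tvalue pairs.
          for line in sys.stdin:
              trace_id, amplitude = line.split()
              print(f"{trace_id}\t{amplitude}")

      def reducer():
          # Lines arrive grouped by key; accumulate a per-trace mean amplitude.
          current, total, count = None, 0.0, 0
          for line in sys.stdin:
              key, value = line.rstrip("\n").split("\t")
              if key != current and current is not None:
                  print(f"{current}\t{total / count}")
                  total, count = 0.0, 0
              current = key
              total += float(value)
              count += 1
          if current is not None:
              print(f"{current}\t{total / count}")

      if __name__ == "__main__":
          mapper() if sys.argv[1] == "map" else reducer()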