  1. Khan N, Yaqoob I, Hashem IA, Inayat Z, Ali WK, Alam M, et al.
    ScientificWorldJournal, 2014;2014:712826.
    PMID: 25136682 DOI: 10.1155/2014/712826
    Big Data has gained much attention from academia and the IT industry. In the digital and computing world, information is generated and collected at a rate that rapidly exceeds the boundary range. Currently, over 2 billion people worldwide are connected to the Internet, and over 5 billion individuals own mobile phones. By 2020, 50 billion devices are expected to be connected to the Internet. At this point, predicted data production will be 44 times greater than that in 2009. As information is transferred and shared at light speed on optic fiber and wireless networks, the volume of data and the speed of market growth increase. However, the fast growth rate of such large data generates numerous challenges, such as the rapid growth of data, transfer speed, diverse data, and security. Nonetheless, Big Data is still in its infancy stage, and the domain has not been reviewed in general. Hence, this study comprehensively surveys and classifies the various attributes of Big Data, including its nature, definitions, rapid growth rate, volume, management, analysis, and security. This study also proposes a data life cycle that uses the technologies and terminologies of Big Data. Future research directions in this field are determined based on opportunities and several open issues in the Big Data domain. These research directions facilitate the exploration of the domain and the development of optimal techniques to address Big Data.
    Matched MeSH terms: Automatic Data Processing*
  2. Rasheed W, Neoh YY, Bin Hamid NH, Reza F, Idris Z, Tang TB
    Comput Biol Med, 2017 Oct 1;89:573-583.
    PMID: 28551109 DOI: 10.1016/j.compbiomed.2017.05.005
    Functional neuroimaging modalities play an important role in deciding the diagnosis and course of treatment of neuronal dysfunction and degeneration. This article presents an analytical tool with visualization by exploiting the strengths of the MEG (magnetoencephalographic) neuroimaging technique. The tool automates MEG data import (in tSSS format), channel information extraction, time/frequency decomposition, and circular graph visualization (connectogram) for simple result inspection. For advanced users, the tool also provides magnitude squared coherence (MSC) values, allowing personalized threshold levels, and the computation of a default model from the MEG data of a control population. The default model obtained from healthy-population data serves as a useful benchmark for diagnosing and monitoring neuronal recovery during treatment. The proposed tool further provides optional labels with international 10-10 system nomenclature in order to facilitate comparison studies with the EEG (electroencephalography) sensor space. Potential applications in epilepsy and traumatic brain injury studies are also discussed.
    Matched MeSH terms: Automatic Data Processing/methods*
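    The tool above thresholds magnitude squared coherence (MSC) between MEG sensor pairs to build a connectogram. As a rough, hypothetical illustration only (the channel count, frequency band, and threshold below are assumptions, not the authors' settings), MSC can be computed per channel pair with SciPy:

      # Illustrative sketch, not the authors' code: mean MSC per channel pair in a
      # chosen band, thresholded into a binary connectivity (connectogram) matrix.
      import numpy as np
      from scipy.signal import coherence

      def msc_matrix(data, fs, band=(8.0, 13.0), nperseg=1024):
          """data: (n_channels, n_samples) array; returns mean MSC in `band` per pair."""
          n_ch = data.shape[0]
          msc = np.zeros((n_ch, n_ch))
          for i in range(n_ch):
              for j in range(i + 1, n_ch):
                  f, cxy = coherence(data[i], data[j], fs=fs, nperseg=nperseg)
                  sel = (f >= band[0]) & (f <= band[1])
                  msc[i, j] = msc[j, i] = cxy[sel].mean()
          return msc

      rng = np.random.default_rng(0)
      fake_meg = rng.standard_normal((8, 10000))         # 8 synthetic channels
      adjacency = msc_matrix(fake_meg, fs=1000.0) > 0.5  # personalized threshold (assumed)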
  3. Kamaludin H, Mahdin H, Abawajy JH
    PLoS One, 2018;13(3):e0193951.
    PMID: 29565982 DOI: 10.1371/journal.pone.0193951
    Although Radio Frequency Identification (RFID) is poised to displace barcodes, security vulnerabilities pose serious challenges for global adoption of the RFID technology. Specifically, RFID tags are prone to basic cloning and counterfeiting security attacks. A successful cloning of RFID tags in many commercial applications can lead to serious problems such as financial losses, brand damage, and risks to public safety and health. With many industries, such as pharmaceuticals, and businesses deploying RFID technology across a variety of products, it is important to tackle the RFID tag cloning problem and improve the resistance of RFID systems. To this end, we propose an approach for detecting cloned RFID tags in RFID systems with high detection accuracy and minimal overhead, thus overcoming practical challenges in existing approaches. The proposed approach is based on the consistency of dual hash collisions and a modified count-min sketch vector. We evaluated the proposed approach through extensive experiments and compared it with existing baseline approaches in terms of execution time and detection accuracy under varying RFID tag cloning ratios. The results of the experiments show that the proposed approach outperforms the baseline approaches in cloned RFID tag detection accuracy.
    Matched MeSH terms: Automatic Data Processing/methods
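    The approach above builds on a count-min sketch. The sketch below is a generic count-min sketch in Python, purely for illustration: it is not the authors' modified vector or their dual-hash consistency test, and the tag IDs and anomaly threshold are invented.

      # Generic count-min sketch: approximate per-tag observation counts; a tag
      # observed far more often than expected is treated as a cloning suspect
      # (this threshold rule is an assumption, not the paper's criterion).
      import hashlib

      class CountMinSketch:
          def __init__(self, width=1024, depth=4):
              self.width, self.depth = width, depth
              self.table = [[0] * width for _ in range(depth)]

          def _buckets(self, key):
              for row in range(self.depth):
                  digest = hashlib.sha256(f"{row}:{key}".encode()).hexdigest()
                  yield row, int(digest, 16) % self.width

          def add(self, key):
              for row, col in self._buckets(key):
                  self.table[row][col] += 1

          def estimate(self, key):
              return min(self.table[row][col] for row, col in self._buckets(key))

      cms = CountMinSketch()
      for observed_tag in ["EPC-0001", "EPC-0002", "EPC-0001", "EPC-0001"]:
          cms.add(observed_tag)
      suspect_clone = cms.estimate("EPC-0001") > 2   # hypothetical threshold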
  4. Anisi MH, Abdullah AH, Razak SA, Ngadi MA
    Sensors (Basel), 2012 Mar 27;12(4):3964-96.
    PMID: 23443040 DOI: 10.3390/s120403964
    Recent years have witnessed a growing interest in deploying large populations of microsensors that collaborate in a distributed manner to gather and process sensory data and deliver them to a sink node through wireless communication systems. Currently, there is a lot of interest in data routing for Wireless Sensor Networks (WSNs) due to their unique challenges compared to conventional routing in wired networks. In WSNs, each data routing approach pursues one or more specific goals according to the application. Although the general goal of every data routing approach in WSNs is to extend the network lifetime, and every approach should be aware of the energy level of the nodes, a given approach may focus on one or more specific goals depending on the application. Thus, existing approaches can be categorized according to their routing goals. In this paper, the main goals of data routing approaches in sensor networks are described. Then, the best-known and most recent data routing approaches in WSNs are classified and studied according to their specific goals.
    Matched MeSH terms: Automatic Data Processing/methods*
  5. Ag Z, Cheong SK
    Malays J Pathol, 1995 Dec;17(2):77-81.
    PMID: 8935130
    A system for computerising full blood picture reporting, developed in-house using dBASE IV on IBM-compatible microcomputers in a local area network environment, is described. The software package has a user-friendly interface consisting of a horizontal main menu bar with associated pull-down submenus. The package captures data directly from an automatic blood cell counter and provides options to modify or delete records, search for records, print interim, final or cumulative reports, record differential counts with an emulator, and facilitate housekeeping activities, which include backing up databases and repairing corrupted indices. The implementation of this system has helped to improve the efficiency of reporting the full blood picture in the haematology laboratory.
    Matched MeSH terms: Automatic Data Processing*
  6. Ji Y, Ashton L, Pedley SM, Edwards DP, Tang Y, Nakamura A, et al.
    Ecol Lett, 2013 Oct;16(10):1245-57.
    PMID: 23910579 DOI: 10.1111/ele.12162
    To manage and conserve biodiversity, one must know what is being lost, where, and why, as well as which remedies are likely to be most effective. Metabarcoding technology can characterise the species compositions of mass samples of eukaryotes or of environmental DNA. Here, we validate metabarcoding by testing it against three high-quality standard data sets that were collected in Malaysia (tropical), China (subtropical) and the United Kingdom (temperate) and that comprised 55,813 arthropod and bird specimens identified to species level with the expenditure of 2,505 person-hours of taxonomic expertise. The metabarcode and standard data sets exhibit statistically correlated alpha- and beta-diversities, and the two data sets produce similar policy conclusions for two conservation applications: restoration ecology and systematic conservation planning. Compared with standard biodiversity data sets, metabarcoded samples are taxonomically more comprehensive, many times quicker to produce, less reliant on taxonomic expertise and auditable by third parties, which is essential for dispute resolution.
    Matched MeSH terms: Automatic Data Processing*
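    The validation above rests on comparing diversity estimates from metabarcoded and standard samples. A toy illustration of that kind of comparison follows; the site-by-species tables, the Shannon index, and the use of Spearman correlation are assumptions for illustration, not the study's actual analysis.

      # Correlating per-site alpha diversity between two datasets (illustrative only).
      import numpy as np
      from scipy.stats import spearmanr

      def shannon(counts):
          """Shannon diversity index of one site's species counts."""
          p = counts / counts.sum()
          p = p[p > 0]
          return float(-(p * np.log(p)).sum())

      standard = np.array([[10, 5, 0, 2], [3, 8, 1, 0], [0, 2, 9, 4]])      # sites x species
      metabarcode = np.array([[12, 4, 1, 1], [2, 9, 2, 0], [1, 1, 11, 3]])
      rho, p_value = spearmanr([shannon(r) for r in standard],
                               [shannon(r) for r in metabarcode])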
  7. Onwuegbuzie IU, Abd Razak S, Fauzi Isnin I, Darwish TSJ, Al-Dhaqm A
    PLoS One, 2020;15(8):e0237154.
    PMID: 32797055 DOI: 10.1371/journal.pone.0237154
    Data prioritization of heterogeneous data in wireless sensor networks gives meaning to mission-critical data that are time-sensitive, as this may be a matter of life and death. However, the IEEE 802.15.4 standard does not consider the prioritization of data. Prioritization schemes proffered in the literature have not adequately addressed this issue, as they use either a single or a complex backoff algorithm to estimate backoff time-slots for prioritized data. Consequently, the carrier sense multiple access with collision avoidance scheme exhibits an exponentially increasing range of backoff times. These approaches are not only inefficient but also result in high latency and increased power consumption. In this article, the concept of class of service (CS) was adopted to prioritize heterogeneous data (real-time and non-real-time), resulting in an optimized prioritized backoff MAC scheme called Class of Service Traffic Priority-based Medium Access Control (CSTP-MAC). This scheme classifies data into high-priority data (HPD) and low-priority data (LPD) and computes backoff times with expressions specific to each priority class. The improved scheme grants nodes the opportunity to access the shared medium in a timely and power-efficient manner. Benchmarked against contemporary schemes, CSTP-MAC attained a 99% packet delivery ratio with improved power-saving capability, which translates to a longer operational lifetime.
    Matched MeSH terms: Automatic Data Processing/methods*
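    The abstract does not give CSTP-MAC's backoff expressions, so the following is only a hypothetical sketch of the general idea of class-of-service backoff: high-priority data draws its wait time from a smaller contention window than low-priority data. The window formulas and exponent are invented for illustration.

      # Hypothetical priority-aware backoff (NOT the CSTP-MAC expressions).
      import random

      SLOT_TIME_US = 320  # IEEE 802.15.4 unit backoff period: 20 symbols at 62.5 ksymbols/s

      def backoff_slots(priority, backoff_exponent):
          if priority == "HPD":                              # high-priority data
              window = 2 ** max(backoff_exponent - 1, 1)     # smaller window, earlier access
          else:                                              # "LPD": low-priority data
              window = 2 ** backoff_exponent
          return random.randint(0, window - 1)

      hpd_wait_us = backoff_slots("HPD", backoff_exponent=3) * SLOT_TIME_US
      lpd_wait_us = backoff_slots("LPD", backoff_exponent=3) * SLOT_TIME_US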
  8. Loke SC, Kasmiran KA, Haron SA
    PLoS One, 2018;13(11):e0206420.
    PMID: 30412588 DOI: 10.1371/journal.pone.0206420
    Software optical mark recognition (SOMR) is the process whereby information entered on a survey form or questionnaire is converted using specialized software into a machine-readable format. SOMR normally requires input fields to be completely darkened, have no internal labels, or be filled with a soft pencil, otherwise mark detection will be inaccurate. Forms can also have print and scan artefacts that further increase the error rate. This article presents a new method of mark detection that improves over existing techniques based on pixel counting and simple thresholding. Its main advantage is that it can be used under a variety of conditions and yet maintain a high level of accuracy that is sufficient for scientific applications. Field testing shows no software misclassification in 5695 samples filled by trained personnel, and only two misclassifications in 6000 samples filled by untrained respondents. Sensitivity, specificity, and accuracy were 99.73%, 99.98%, and 99.94% respectively, even in the presence of print and scan artefacts, which was superior to other methods tested. A separate direct comparison for mark detection showed a sensitivity, specificity, and accuracy respectively of 99.7%, 100.0%, 100.0% (new method), 96.3%, 96.0%, 96.1% (pixel counting), and 99.9%, 99.8%, 99.8% (simple thresholding) on clean forms, and 100.0%, 99.1%, 99.3% (new method), 98.4%, 95.6%, 96.2% (pixel counting), 100.0%, 38.3%, 51.4% (simple thresholding) on forms with print artefacts. This method is designed for bubble and box fields, while other types such as handwriting fields require separate error control measures.
    Matched MeSH terms: Automatic Data Processing
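    The article's own detection method is not described in enough detail in the abstract to reproduce, but the pixel-counting baseline it is compared against can be sketched as follows; the bounding box, grayscale cutoff, and fill-ratio threshold are assumptions.

      # Baseline pixel-counting mark detection (a comparison method, not the new one).
      import numpy as np

      def bubble_is_filled(gray_image, box, dark_cutoff=128, fill_ratio=0.35):
          """box = (row0, row1, col0, col1) bounding one bubble field."""
          r0, r1, c0, c1 = box
          region = gray_image[r0:r1, c0:c1]
          dark_fraction = np.mean(region < dark_cutoff)
          return dark_fraction >= fill_ratio

      page = np.full((100, 100), 255, dtype=np.uint8)   # a blank synthetic scan
      page[40:60, 40:60] = 0                            # one darkened bubble
      print(bubble_is_filled(page, (35, 65, 35, 65)))   # True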
  9. Mohd Yusof M, Takeda T, Mihara N, Matsumura Y
    Stud Health Technol Inform, 2020 Jun 16;270:1036-1040.
    PMID: 32570539 DOI: 10.3233/SHTI200319
    Health information systems (HIS) and clinical workflows generate medication errors that affect the quality of patient care. A rigorous evaluation of the medication process's error risk, control, and impact on clinical practice enables an understanding of the latent and active factors that contribute to HIS-induced errors. This paper reports the preliminary findings of an evaluation case study of a 1000-bed Japanese secondary care teaching hospital using observation, interview, and document analysis methods. Findings were analysed from a process perspective by adopting a recently introduced framework known as Human, Organisation, Process, and Technology-fit. Process factors influencing the risk of medication errors include template- and calendar-based systems, intuitive design, barcode checks, ease of use, alerts, policy, systematic task organisation, and safety culture. Approaches for managing medication errors also play an important role in error reduction and clinical workflow.
    Matched MeSH terms: Automatic Data Processing
  10. Shaharum SM, Sundaraj K, Palaniappan R
    Bosn J Basic Med Sci, 2012 Nov;12(4):249-55.
    PMID: 23198941
    The purpose of this paper is to present, by means of a survey, evidence of automated wheeze detection systems that can be very beneficial for asthmatic patients. Generally, to detect asthma in a patient, a stethoscope is used to ascertain whether wheezes are present. This poses a major problem nowadays because a number of patients tend to delay interpretation, which can lead to misinterpretations and, in the worst cases, to death. Therefore, the development of automated systems would ease the burden on medical personnel. A further discussion of automated wheeze detection systems is presented later in the paper. As for the methodology, a systematic search of articles published from 1985 to 2012 was conducted. Important details, including the hardware used, placement of the hardware, and signal processing methods, are presented clearly, in the hope of helping and encouraging future researchers to develop commercial systems that will improve the diagnosis and monitoring of asthmatic patients.
    Matched MeSH terms: Automatic Data Processing
  11. Hassan A, Ibrahim F
    J Digit Imaging, 2011 Apr;24(2):308-13.
    PMID: 20386951 DOI: 10.1007/s10278-010-9283-8
    This paper presents the development of a kidney TeleUltrasound consultation system. The TeleUltrasound system provides an innovative design that aids the acquisition, archiving, and dissemination of medical data and information over the Internet as its backbone. The system provides data sharing to allow remote collaboration, viewing, consultation, and diagnosis of medical data. The design is layered upon a standard known as Digital Imaging and Communications in Medicine (DICOM). The DICOM standard defines protocols for exchanging medical images and their associated data. The TeleUltrasound system is an integrated solution for retrieving, processing, and archiving images, and it provides data storage management using a Structured Query Language (SQL) database. A web-based interface offers the additional advantage of global accessibility to experts, which opens the opportunity for wider examination and multiple consultations. The system is equipped with a high level of data security, and its performance has been tested with white-, black-, and gray-box techniques, with satisfactory results. The overall system has been evaluated by several radiologists in Malaysia, the United Arab Emirates, and Sudan; the results are presented in this paper.
    Matched MeSH terms: Automatic Data Processing/methods*
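    As a minimal, hypothetical sketch of the kind of DICOM-plus-SQL plumbing described above (the paper does not name its implementation stack; the pydicom library, SQLite stand-in, file path, and table schema below are assumptions):

      # Read DICOM metadata and record it in a SQL table (illustrative only).
      import sqlite3
      import pydicom

      ds = pydicom.dcmread("kidney_ultrasound.dcm")      # hypothetical study file
      conn = sqlite3.connect("teleultrasound.db")
      conn.execute(
          "CREATE TABLE IF NOT EXISTS studies (patient_id TEXT, study_uid TEXT, modality TEXT)"
      )
      conn.execute(
          "INSERT INTO studies VALUES (?, ?, ?)",
          (str(ds.PatientID), str(ds.StudyInstanceUID), str(ds.Modality)),
      )
      conn.commit()
      conn.close()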
  12. Sheikh Ab Hamid S, Abd Rahman MN
    Cell Tissue Bank, 2010 Nov;11(4):401-5.
    PMID: 20582480 DOI: 10.1007/s10561-010-9188-2
    In Malaysia, tissue banking activities began at the Universiti Sains Malaysia (USM) Tissue Bank in the early 1990s. Since then, a few other bone banks have been set up in other government hospitals and institutions. However, these banks are not governed by the national authority. In addition, there is no requirement set by the national regulatory authority on coding and traceability for donated human tissues for transplantation. Hence, the USM Tissue Bank has taken the initiative to adopt a system that enables the traceability of tissues between the donor, the processed tissue, and the recipient, based on other international standards for tissue banks. The traceability trail has been effective, and the bank is certified as compliant with the international standard ISO 9001:2008.
    Matched MeSH terms: Automatic Data Processing/standards*
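    The abstract does not describe the bank's coding scheme, so the following is only a toy illustration of donor-to-tissue-to-recipient traceability; every identifier and field name is invented.

      # Toy traceability record: trace a processed tissue back to its donor and
      # forward to its recipient (illustrative only).
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class TissueRecord:
          donor_code: str
          tissue_code: str
          recipient_code: Optional[str] = None   # filled in at transplantation

      registry = {}
      registry["USM-T-000123"] = TissueRecord(donor_code="USM-D-000045",
                                              tissue_code="USM-T-000123")
      registry["USM-T-000123"].recipient_code = "HOSP-R-000789"

      def trace(tissue_code):
          """Return the full donor/tissue/recipient trail for a tissue code."""
          return registry[tissue_code]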
  13. Ramlan EI, Zauner KP
    Biosystems, 2011 Jul;105(1):14-24.
    PMID: 21396427 DOI: 10.1016/j.biosystems.2011.02.006
    Despite an exponential increase in computing power over the past decades, present information technology falls far short of expectations in areas such as cognitive systems and micro robotics. Organisms demonstrate that it is possible to implement information processing in a radically different way from what we have available in present technology, and that there are clear advantages from the perspective of power consumption, integration density, and real-time processing of ambiguous data. Accordingly, the question whether the current silicon substrate and associated computing paradigm is the most suitable approach to all types of computation has come to the fore. Macromolecular materials, so successfully employed by nature, possess uniquely promising properties as an alternative substrate for information processing. The two key features of macromolecules are their conformational dynamics and their self-assembly capabilities. The purposeful design of macromolecules capable of exploiting these features has proven to be a challenge; however, for some groups of molecules it is increasingly practicable. We here introduce an algorithm capable of designing groups of self-assembling nucleic acid molecules with multiple conformational states. Evaluation using natural and artificially designed nucleic acid molecules significantly favours this algorithm over the probabilistic approach. Furthermore, the thermodynamic properties of the generated candidates are within the same approximation as the customised trans-acting switching molecules reported in the laboratory.
    Matched MeSH terms: Automatic Data Processing/methods*
  14. Acharya UR, Oh SL, Hagiwara Y, Tan JH, Adeli H, Subha DP
    Comput Methods Programs Biomed, 2018 Jul;161:103-113.
    PMID: 29852953 DOI: 10.1016/j.cmpb.2018.04.012
    In recent years, advanced neurocomputing and machine learning techniques have been used for Electroencephalogram (EEG)-based diagnosis of various neurological disorders. In this paper, a novel computer model is presented for EEG-based screening of depression using a deep neural network machine learning approach known as a Convolutional Neural Network (CNN). The proposed technique does not require a semi-manually selected set of features to be fed into a classifier; it learns automatically and adaptively from the input EEG signals to differentiate EEGs obtained from depressive and normal subjects. The model was tested using EEGs obtained from 15 normal and 15 depressed patients. The algorithm attained accuracies of 93.5% and 96.0% using EEG signals from the left and right hemisphere, respectively. It was discovered in this research that the EEG signals from the right hemisphere are more distinctive in depression than those from the left hemisphere. This finding is consistent with recent research indicating that depression is associated with a hyperactive right hemisphere. An exciting extension of this research would be the diagnosis of different stages and severities of depression and the development of a Depression Severity Index (DSI).
    Matched MeSH terms: Automatic Data Processing*
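    The abstract does not specify the network architecture, so the following PyTorch snippet is only a schematic 1-D CNN for two-class EEG segments; the layer sizes, input length, and framework choice are assumptions, not the authors' model.

      # Schematic 1-D CNN for EEG classification (depressed vs. normal), illustrative only.
      import torch
      import torch.nn as nn

      class EEGConvNet(nn.Module):
          def __init__(self, n_samples=2000):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
                  nn.Conv1d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
              )
              with torch.no_grad():                      # infer flattened feature size
                  flat = self.features(torch.zeros(1, 1, n_samples)).numel()
              self.classifier = nn.Sequential(
                  nn.Flatten(), nn.Linear(flat, 32), nn.ReLU(), nn.Linear(32, 2)
              )

          def forward(self, x):                          # x: (batch, 1, n_samples)
              return self.classifier(self.features(x))

      model = EEGConvNet()
      logits = model(torch.randn(4, 1, 2000))            # 4 synthetic EEG segments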
  15. Lim TA, Wong WH, Lim KY
    Med J Malaysia, 2005 Oct;60(4):432-40.
    PMID: 16570704
    The objective of this survey was to obtain a self-reported assessment of the use of information technology (IT) by final-year medical students. Two hundred and sixty-five students responded to a questionnaire survey; 81.5% of students considered their computer skills adequate, while 87.9% had access to computers outside the campus. Most students reported adequate skills at word processing, e-mailing and surfing the Internet. Fifty-three percent of students spent three hours or more each week on the computer. While students indicated a general willingness to access Internet-based materials, further steps need to be taken to increase the use of this method of instruction.
    Matched MeSH terms: Automatic Data Processing/utilization
  16. Hearn RL
    Asian Pac Cens Forum, 1985 May;11(4):1-4, 9-14, 16.
    PMID: 12267276
    Matched MeSH terms: Automatic Data Processing*
  17. Pasquariella SK
    POPIN Bull, 1984 Dec.
    PMID: 12267287
    Matched MeSH terms: Automatic Data Processing*
  18. Kamel NS, Sayeed S, Ellis GA
    IEEE Trans Pattern Anal Mach Intell, 2008 Jun;30(6):1109-13.
    PMID: 18421114 DOI: 10.1109/TPAMI.2008.32
    Utilizing the multiple degrees of freedom offered by the data glove for each finger and the hand, a novel on-line signature verification system is presented that uses the Singular Value Decomposition (SVD) numerical tool for signature classification and verification. The proposed technique uses the SVD to find the r singular vectors that capture the maximal energy of the glove data matrix A, called the principal subspace, so that the effective dimensionality of A can be reduced. Having modeled the data glove signature through its r-dimensional principal subspace, signature authentication is performed by finding the angles between the different subspaces. A demonstration of the data glove as an effective high-bandwidth data entry device for signature verification is presented. This SVD-based signature verification technique is tested, and its performance is shown to be able to recognize forged signatures with a false acceptance rate of less than 1.2%.
    Matched MeSH terms: Automatic Data Processing/methods*
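    The subspace comparison described above can be sketched directly with NumPy and SciPy; the matrix sizes, the value of r, and the acceptance threshold below are assumptions for illustration.

      # Model each glove-signature matrix by its r leading left singular vectors,
      # then compare two signatures via the principal angles between the subspaces.
      import numpy as np
      from scipy.linalg import subspace_angles

      def principal_subspace(A, r=3):
          """r leading left singular vectors of A (A: glove sensors x time samples)."""
          U, _, _ = np.linalg.svd(A, full_matrices=False)
          return U[:, :r]

      rng = np.random.default_rng(1)
      enrolled = principal_subspace(rng.standard_normal((22, 500)))   # reference signature
      probe = principal_subspace(rng.standard_normal((22, 500)))      # presented signature
      angles = subspace_angles(enrolled, probe)                       # radians
      accepted = float(np.max(angles)) < 0.3                          # hypothetical threshold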
  19. Al-batah MS, Isa NA, Klaib MF, Al-Betar MA
    Comput Math Methods Med, 2014;2014:181245.
    PMID: 24707316 DOI: 10.1155/2014/181245
    To date, cancer of the uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods to screen for cervical cancer (i.e., the Pap smear and liquid-based cytology (LBC)) are time-consuming and dependent on the skill of the cytopathologist, and are thus rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, an automatic feature extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called the multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models arranged in parallel to produce a model with a multi-input multi-output structure. The system is capable of classifying a cervical cell image into three groups, namely normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy.
    Matched MeSH terms: Automatic Data Processing
  20. Nolan MJ, Jex AR, Upcroft JA, Upcroft P, Gasser RB
    Electrophoresis, 2011 Aug;32(16):2075-90.
    PMID: 23479788
    We barcoded 25 in vitro isolates (representing 92 samples) of Giardia duodenalis from humans and other animals, which have been assembled by the Upcroft team at the Queensland Institute of Medical Research over a period of almost three decades. We used mutation scanning-coupled sequencing of loci in the triosephosphate isomerase, glutamate dehydrogenase and β-giardin genes, combined with phylogenetic analysis, to genetically characterise them. Specifically, the isolates (n = 14) of G. duodenalis from humans from Australia (AD113; BRIS/83/HEPU/106; BRIS/87/HEPU/713; BRIS/89/HEPU/1003; BRIS/92/HEPU/1541; BRIS/92/HEPU/1590; BRIS/92/HEPU/2443; BRIS/93/HEPU/1706), Malaysia (KL/92/IMR/1106) and Afghanistan (WB), a cat from Australia (BAC2), a sheep from Canada (OAS1) and a sulphur-crested cockatoo from Australia (BRIS/95/HEPU/2041) represented assemblage A (sub-assemblage AI-1, AI-2 or AII-2); isolates (n = 10) from humans from Australia (BRIS/91/HEPU/1279; BRIS/92/HEPU/2342; BRIS/92/HEPU/2348; BRIS/93/HEPU/1638; BRIS/93/HEPU/1653; BRIS/93/HEPU/1705; BRIS/93/HEPU/1718; BRIS/93/HEPU/1727), Papua New Guinea (BRIS/92/HEPU/1487) and Canada (H7) represented assemblage B (sub-assemblage BIV) and an isolate from cattle from Australia (BRIS/92/HEPU/1709) had a match to assemblage E. Isolate BRIS/90/HEPU/1229 from a human from Australia was shown to represent a mixed population of assemblages A and B. These barcoded isolates (including stocks and derived lines) now allow direct comparisons of experimental data among laboratories and represent a massive resource for transcriptomic, proteomic, metabolic and functional genomic studies using advanced molecular technologies.
    Matched MeSH terms: Automatic Data Processing