Displaying all 16 publications

  1. Lim YC, Cheong SK
    Malays J Pathol, 1992 Jun;14(1):13-7.
    PMID: 1469912
    A system for computerising histopathology records, developed in-house using dBASE IV on IBM-compatible microcomputers in a local area network, is described. The software package uses a horizontal main menu bar with associated pull-down submenus as the interface between the machine and the user. It is very easy to use. The package provides options for selecting databases by year, entering/editing records, browsing data, making multi-characteristic searches/retrievals, printing data, and maintaining databases, which includes backing up and repairing corrupted databases.
    Matched MeSH terms: Database Management Systems*
  2. Mahdin H, Abawajy J
    Sensors (Basel), 2011;11(10):9863-77.
    PMID: 22163730 DOI: 10.3390/s111009863
    Radio frequency identification (RFID) systems are emerging as the primary object identification mechanism, especially in supply chain management. However, RFID readers naturally generate a large number of duplicate readings. Removing these duplicates from the RFID data stream is paramount, as they contribute no new information to the system and waste system resources. Existing approaches to this problem cannot fulfill the real-time demands of processing the massive RFID data stream. We propose a data filtering approach that efficiently detects and removes duplicate readings from RFID data streams. Experimental results show that the proposed approach offers a significant improvement over existing approaches.
    Matched MeSH terms: Database Management Systems*
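    The duplicate-removal task described in entry 2 can be pictured with a minimal sliding-window filter: a reading is kept only if the same tag has not been accepted within a short time window. This is a generic baseline sketch, not the paper's actual algorithm, which the abstract does not detail.
```python
from collections import OrderedDict

class SlidingWindowFilter:
    """Drop RFID readings whose tag ID was already accepted within `window` seconds.

    A generic baseline for duplicate removal in RFID streams; the paper's own
    filtering algorithm is not described in the abstract above.
    """

    def __init__(self, window: float = 2.0):
        self.window = window
        self.last_seen = OrderedDict()  # tag_id -> last accepted timestamp

    def accept(self, tag_id: str, timestamp: float) -> bool:
        # Evict entries that fell out of the time window to bound memory use.
        while self.last_seen:
            oldest_tag, oldest_ts = next(iter(self.last_seen.items()))
            if timestamp - oldest_ts > self.window:
                self.last_seen.pop(oldest_tag)
            else:
                break
        # A reading is a duplicate if the same tag was accepted recently.
        if tag_id in self.last_seen:
            return False
        self.last_seen[tag_id] = timestamp
        self.last_seen.move_to_end(tag_id)
        return True

if __name__ == "__main__":
    f = SlidingWindowFilter(window=2.0)
    stream = [("tagA", 0.0), ("tagA", 0.5), ("tagB", 1.0), ("tagA", 3.0)]
    kept = [(t, ts) for t, ts in stream if f.accept(t, ts)]
    print(kept)  # [('tagA', 0.0), ('tagB', 1.0), ('tagA', 3.0)]
```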
  3. Cheah YN, Abidi SS
    PMID: 11187672
    The abundance and transient nature of healthcare knowledge have rendered it difficult to acquire with traditional knowledge acquisition methods. In this paper, we propose a Knowledge Management approach, through the use of scenarios, as a means to acquire and represent tacit healthcare knowledge. This proposition is based on the premise that tacit knowledge is best manifested in atypical situations. We also provide an overview of the representational scheme and the novel acquisition mechanism for scenarios.
    Matched MeSH terms: Database Management Systems
  4. Che Mustapha Yusuf, J., Mohd Su’ud, M., Boursier, P., Muhammad, A.
    MyJurnal
    Finding relevant disaster data within a huge metadata overhead often results in frustrating search experiences caused by unclear access points, ambiguous search methods, unsuitable metadata, and long response times. Moreover, the semantic relations between retrieved objects are often neglected. This paper presents a system architecture that makes use of ontologies to enable semantic metadata descriptions for gathering and integrating multi-format documents in the context of disaster management. After a brief discussion of the challenges of the integration process, the Multi-format Information Retrieval, Integration and Presentation (MIRIP) architecture is presented. A specific approach to ontology development and mapping is introduced in order to semantically associate the user's query with document metadata. The ontology model was designed to follow inspirational and collaborative approaches with a top-down to bottom-up implementation. A prototype of the integrated disaster management information system is currently under development, based on the architecture presented in this paper.
    Matched MeSH terms: Database Management Systems
  5. Liu Yang, Xue Bai, Yinjie Hu, Qiqi Wang, Jun Deng
    Sains Malaysiana, 2017;46:2195-2204.
    Combining geographic information systems with mineral energy data management helps promote the study of mineral energy and of the ecological damage and environmental pollution caused by its development and utilization, and thus has important application value. The Trace Elements in Coal of China Database Management System (TECC) established in this paper applies B/S three-layer architecture, an Oracle database, AJAX and WebGIS. TECC is the first database system aimed at managing data on trace elements in coal in China. It includes a data management and analysis module, a document management module, a trace-elements-in-coal data maintenance module and an authority management module. A data entry specification is put forward in the present study, and spatial data are included in the TECC system. The system provides data query, analysis, management, maintenance and map browsing, thematic map drawing and satellite video display, laying the foundation for the analysis of large volumes of trace-element data on coal. It is a practical platform for the acquisition, management, exchange and sharing of trace element and geochemical research data on coal.
    Matched MeSH terms: Database Management Systems
  6. Zare MR, Mueen A, Seng WC
    J Digit Imaging, 2014 Feb;27(1):77-89.
    PMID: 24092327 DOI: 10.1007/s10278-013-9637-0
    The demand for automatic classification of medical X-ray images is rising faster than ever. In this paper, an approach is presented to achieve a high accuracy rate for classes of medical databases with a high ratio of intraclass variability and interclass similarity. The classification framework was constructed via annotation using three techniques: annotation by binary classification, annotation by probabilistic latent semantic analysis, and annotation using top similar images. Next, the final annotation was constructed by applying ranking similarity to the annotated keywords produced by each technique. The final annotation keywords were then divided into three levels according to body region, specific bone structure within the body region, and imaging direction. Different weights were given to each level of keywords and used to calculate a weightage for each category of medical images based on its ground-truth annotation. The weightage computed from the generated annotation of a query image was compared with the weightage of each category of medical images, and the query image was then assigned to the category with the weightage closest to its own. The average accuracy rate reported is 87.5%.
    Matched MeSH terms: Database Management Systems/statistics & numerical data*
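    Entry 6 assigns a query X-ray to the category whose level-weighted annotation score is closest to the query's own score. The toy sketch below illustrates that matching step; the level weights, keyword codes and categories are hypothetical stand-ins, since the abstract does not give the actual values or coding scheme.
```python
# Hypothetical level weights and keyword codes for illustration only.
LEVEL_WEIGHTS = {"body_region": 100, "bone_structure": 10, "direction": 1}
KEYWORD_CODES = {
    "body_region":    {"chest": 1, "limb": 2, "skull": 3},
    "bone_structure": {"rib": 1, "femur": 2, "cranium": 3},
    "direction":      {"frontal": 1, "lateral": 2},
}

def weightage(annotation: dict) -> int:
    """Combine the per-level keyword codes into a single weighted score."""
    return sum(LEVEL_WEIGHTS[lvl] * KEYWORD_CODES[lvl][kw]
               for lvl, kw in annotation.items())

def assign_category(query_annotation: dict, categories: dict) -> str:
    """Assign the query image to the category with the closest weightage."""
    q = weightage(query_annotation)
    return min(categories, key=lambda c: abs(weightage(categories[c]) - q))

if __name__ == "__main__":
    categories = {
        "chest_frontal": {"body_region": "chest", "bone_structure": "rib", "direction": "frontal"},
        "skull_lateral": {"body_region": "skull", "bone_structure": "cranium", "direction": "lateral"},
    }
    query = {"body_region": "chest", "bone_structure": "rib", "direction": "lateral"}
    print(assign_category(query, categories))  # chest_frontal
```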
  7. Abdullah AA, Altaf-Ul-Amin M, Ono N, Sato T, Sugiura T, Morita AH, et al.
    Biomed Res Int, 2015;2015:139254.
    PMID: 26495281 DOI: 10.1155/2015/139254
    Volatile organic compounds (VOCs) are small molecules that exhibit high vapor pressure under ambient conditions and have low boiling points. Although VOCs contribute only a small proportion of the total metabolites produced by living organisms, they play an important role in chemical ecology, specifically in the biological interactions between organisms and ecosystems. VOCs are also important in the health care field, as they are presently used as biomarkers to detect various human diseases. Until now, information on VOCs has been scattered across the literature, and no database describing VOCs and their biological activities has been available. To fill this gap, we have developed the KNApSAcK Metabolite Ecology Database, which contains information on the relationships between VOCs and their emitting organisms. The KNApSAcK Metabolite Ecology Database is also linked with the KNApSAcK Core and KNApSAcK Metabolite Activity Databases to provide further information on the metabolites and their biological activities. The VOC database can be accessed online.
    Matched MeSH terms: Database Management Systems*
  8. Cheah YN, Abidi SS
    PMID: 10724990
    In this paper we suggest that the healthcare enterprise needs to be more conscious of its vast knowledge resources and to exploit knowledge management techniques to manage that knowledge efficiently. The development of a healthcare enterprise memory is suggested as a solution, together with a novel approach advocating the operationalisation of healthcare enterprise memories, leading to the modelling of healthcare processes for strategic planning. As an example, we present a simulation of service delivery time in a hospital's outpatient department (OPD).
    Matched MeSH terms: Database Management Systems*
  9. Kedung Fletcher, Anding Nyuak, Tan Phei Yee
    MyJurnal
    Technology adoption is lacking in black pepper farming for automating the daily routine of monitoring black pepper vine growth and nutrient needs. With the Fourth Industrial Revolution (IR4.0) and tremendous improvement in the Internet of Things (IoT), applying precision agriculture to pepper farming is worth considering for its benefits. This paper explores the use of IoT to monitor the fertilizer requirements of pepper vines using a pH sensor. The pH sensor, attached to a Raspberry Pi 3, collects the data and forwards it to a cloud database, from which the farmer can make decisions based on a digital report generated from the database. The Python environment provides the space for coding on the Raspberry Pi, while SQL and PHP are used to design the user interface and manage data in the relational database management system. The pH information provides a better understanding of how this parameter affects the growth of pepper vines, and the farmer is able to access it anywhere and at any time. Therefore, our proposed system will greatly help pepper farmers in Sarawak manage fertilizer usage, minimizing farm inputs and thus increasing their profit.
    Matched MeSH terms: Database Management Systems
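    Entry 9 describes a Raspberry Pi reading a pH sensor in Python and forwarding the values to a cloud database for reporting. The sketch below shows that sensing loop under stated assumptions: the sensor driver, endpoint URL and payload fields are placeholders, not the paper's actual implementation.
```python
import time
import requests

CLOUD_ENDPOINT = "https://example.com/api/ph_readings"  # placeholder URL

def read_ph() -> float:
    """Placeholder for the real pH sensor driver (e.g., an ADC read on the Pi)."""
    raise NotImplementedError("replace with the actual sensor read")

def push_reading(vine_id: str, ph: float) -> None:
    """Send one reading to the cloud database for later reporting."""
    payload = {"vine_id": vine_id, "ph": ph, "timestamp": time.time()}
    requests.post(CLOUD_ENDPOINT, json=payload, timeout=10)

def main(interval_s: int = 3600) -> None:
    """Log one reading per interval, surviving transient sensor/network failures."""
    while True:
        try:
            push_reading("vine-001", read_ph())
        except Exception as exc:
            print(f"reading skipped: {exc}")
        time.sleep(interval_s)
```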
  10. Wan Zaki WMD, Mat Daud M, Abdani SR, Hussain A, Mutalib HA
    Comput Methods Programs Biomed, 2018 Feb;154:71-78.
    PMID: 29249348 DOI: 10.1016/j.cmpb.2017.10.026
    BACKGROUND AND OBJECTIVE: Pterygium is an ocular disease caused by fibrovascular tissue encroachment onto the corneal region. The tissue may cause blurred vision if it grows into the pupil region. In this study, we propose an automatic detection method to differentiate pterygium from non-pterygium (normal) cases on the basis of frontal eye photographs, also known as anterior segment photographed images.

    METHODS: The pterygium screening system was tested on two normal eye databases (UBIRIS and MILES) and two pterygium databases (Australia Pterygium and Brazil Pterygium). This system comprises four modules: (i) a preprocessing module to enhance the pterygium tissue using HSV-Sigmoid; (ii) a segmentation module to differentiate the corneal region and the pterygium tissue; (iii) a feature extraction module to extract corneal features using circularity ratio, Haralick's circularity, eccentricity, and solidity; and (iv) a classification module to identify the presence or absence of pterygium. System performance was evaluated using support vector machine (SVM) and artificial neural network.

    RESULTS: The three-step frame differencing technique was introduced in the corneal segmentation module. The output image successfully covered the region of interest with an average accuracy of 0.9127. The performance of the proposed system using SVM provided the most promising results of 88.7%, 88.3%, and 95.6% for sensitivity, specificity, and area under the curve, respectively.

    CONCLUSION: A basic platform for computer-aided pterygium screening was successfully developed using the proposed modules. The proposed system can classify pterygium and non-pterygium cases reasonably well. In our future work, a standard grading system will be developed to identify the severity of pterygium cases. This system is expected to increase the awareness of communities in rural areas on pterygium.

    Matched MeSH terms: Database Management Systems
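    The classification module in entry 10 feeds corneal shape features (circularity ratio, Haralick's circularity, eccentricity, solidity) to a support vector machine. The sketch below shows a generic scikit-learn pipeline for that stage with placeholder feature values; the paper's HSV-Sigmoid enhancement and three-step frame-differencing segmentation are not reproduced.
```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Each row: [circularity_ratio, haralick_circularity, eccentricity, solidity]
X = np.random.rand(200, 4)             # placeholder features, not real measurements
y = np.random.randint(0, 2, size=200)  # 1 = pterygium, 0 = normal (toy labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM with probability outputs so an ROC curve / AUC can be computed.
clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, scores))
```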
  11. Othman RM, Deris S, Illias RM
    J Biomed Inform, 2008 Feb;41(1):65-81.
    PMID: 17681495
    A genetic similarity algorithm is introduced in this study to find a group of semantically similar Gene Ontology terms. The genetic similarity algorithm combines a semantic similarity measure algorithm with a parallel genetic algorithm. The semantic similarity measure algorithm is used to compute the strength of similarity between Gene Ontology terms. The parallel genetic algorithm is then employed to perform batch retrieval and to accelerate the search in the large search space of the Gene Ontology graph. The genetic similarity algorithm is implemented in a Gene Ontology browser named basic UTMGO to overcome the weaknesses of existing Gene Ontology browsers, which use a conventional approach based on keyword matching. To show the applicability of the basic UTMGO, we extend its structure to develop a Gene Ontology-based protein sequence annotation tool named extended UTMGO. The objective of developing the extended UTMGO is to provide a simple and practical tool capable of producing better results within a reasonable running time and at low computing cost, specifically for offline usage. Computational results and comparisons with other related tools are presented to show the effectiveness of the proposed algorithm and tools.
    Matched MeSH terms: Database Management Systems*
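    Entry 11 combines a semantic similarity measure over the Gene Ontology graph with a parallel genetic algorithm for batch retrieval. The sketch below shows only a simple ancestor-overlap similarity as a generic stand-in for such a measure; the paper's own measure and its GA search layer are not reproduced here.
```python
from functools import lru_cache

# Toy is-a edges: child term -> parent terms. A real GO graph would be loaded
# from an OBO file instead of being hard-coded.
PARENTS = {
    "GO:0003": ["GO:0002"],
    "GO:0004": ["GO:0002"],
    "GO:0002": ["GO:0001"],
    "GO:0001": [],
}

@lru_cache(maxsize=None)
def ancestors(term: str) -> frozenset:
    """All ancestors of a term (including itself) in the is-a hierarchy."""
    result = {term}
    for parent in PARENTS.get(term, []):
        result |= ancestors(parent)
    return frozenset(result)

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of ancestor sets, in [0, 1]."""
    sa, sb = ancestors(a), ancestors(b)
    return len(sa & sb) / len(sa | sb)

print(similarity("GO:0003", "GO:0004"))  # 0.5: the terms share GO:0002 and GO:0001
```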
  12. Usin MF, Ramesh P, Lopez CG
    Malays J Pathol, 2004 Jun;26(1):43-8.
    PMID: 16190106
    Event reporting can provide data to study the failure points of an organization's work processes. As part of ongoing efforts to improve transfusion safety, a Medical Event Reporting System for Transfusion Medicine (MERS-TM), as designed by Kaplan et al., was implemented in the Transfusion Medicine Unit of the University Malaya Medical Centre to provide a standardized means of organized data collection and analysis of transfusion errors, adverse events and near misses. An event reporting form was designed to detect, identify, classify and study the frequency and pattern of events occurring in the unit. Events detected were classified according to the Eindhoven Classification Model (ECM) adopted for MERS-TM. Since our system reported all events, we called it the Event Reporting System - Transfusion Medicine (ERS-TM). Data were collected and analyzed from the reporting forms over a period of five months, from January 15th to June 15th 2002. The first half of the period was a process of evaluation during which 118 events were reported, coded and analyzed, and corrective measures were adopted to prevent the recurrence of the same events. In the latter half, 122 events were reported following the adoption of corrective measures. There was a reduction in the occurrence of some events and an increase in others, mainly those beyond the organization's control. A longer period of evaluation is necessary to identify the underlying contributory causes, which can be useful in developing plans for corrective and preventive action and thereby reducing the rate of recurrence of errors through proper training and the adoption of a just culture.
    Matched MeSH terms: Database Management Systems*
  13. Firdaus Raih M, Ahmad HA, Sharum MY, Azizi N, Mohamed R
    Appl. Bioinformatics, 2005;4(2):147-50.
    PMID: 16128617
    Bacterial proteases are an important group of enzymes that have very diverse biochemical and cellular functions. Proteases from prokaryotic sources also have a wide range of uses, either in medicine as pathogenic factors or in industry and therapeutics. ProLysED (Prokaryotic Lysis Enzymes Database), our meta-server integrated database of bacterial proteases, is a useful, albeit very niche, resource. The features include protease classification browsing and searching, organism-specific protease browsing, molecular information and visualisation of protease structures from the Protein Data Bank (PDB) as well as predicted protease structures.
    Matched MeSH terms: Database Management Systems*
  14. Mueen A, Zainuddin R, Baba MS
    J Digit Imaging, 2008 Sep;21(3):290-5.
    PMID: 17846834
    Image retrieval at the semantic level mostly depends on image annotation or image classification. Image annotation performance largely depends on three issues: (1) automatic image feature extraction; (2) semantic image concept modeling; and (3) an algorithm for semantic image annotation. To address the first issue, multilevel features are extracted to construct a feature vector that represents the contents of the image. To address the second issue, a domain-dependent concept hierarchy is constructed for the interpretation of image semantic concepts. To address the third issue, automatic multilevel code generation is proposed for image classification and multilevel image annotation. We make use of existing image annotations to address the second and third issues. Our experiments on a specific domain of X-ray images have given encouraging results.
    Matched MeSH terms: Database Management Systems
  15. Teoh AB, Goh A, Ngo DC
    IEEE Trans Pattern Anal Mach Intell, 2006 Dec;28(12):1892-901.
    PMID: 17108365
    Biometric analysis for identity verification is becoming a widespread reality. Such implementations necessitate large-scale capture and storage of biometric data, which raises serious issues in terms of data privacy and (if such data is compromised) identity theft. These problems stem from the essential permanence of biometric data, which (unlike secret passwords or physical tokens) cannot be refreshed or reissued if compromised. Our previously presented biometric-hash framework prescribes the integration of external (password or token-derived) randomness with user-specific biometrics, resulting in bitstring outputs with security characteristics (i.e., noninvertibility) comparable to cryptographic ciphers or hashes. The resultant BioHashes are hence cancellable, i.e., straightforwardly revoked and reissued (via refreshed password or reissued token) if compromised. BioHashing furthermore enhances recognition effectiveness, which is explained in this paper as arising from the Random Multispace Quantization (RMQ) of biometric and external random inputs.
    Matched MeSH terms: Database Management Systems
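    Entry 15's BioHash integrates token-derived randomness with the biometric feature vector and quantizes the result into a revocable bit string. The sketch below is a simplified reading of that construction: a token-seeded, orthonormalized random projection followed by sign quantization; feature extraction and threshold selection details are omitted.
```python
import numpy as np

def biohash(features: np.ndarray, token_seed: int, n_bits: int = 64) -> np.ndarray:
    """Simplified BioHash: project onto a token-seeded random basis, then binarize."""
    rng = np.random.default_rng(token_seed)
    # Token-derived random basis, orthonormalized so projections are decorrelated.
    R = rng.standard_normal((features.size, n_bits))
    Q, _ = np.linalg.qr(R)                    # orthonormal columns
    projection = features @ Q                 # inner products with the random basis
    return (projection > 0).astype(np.uint8)  # quantize to a bit string

if __name__ == "__main__":
    x = np.random.rand(128)               # stand-in biometric feature vector
    old = biohash(x, token_seed=1234)
    new = biohash(x, token_seed=9999)     # reissued token -> different, revoked BioHash
    print(old[:8], new[:8])
```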
  16. Khuan LY, Bister M, Blanchfield P, Salleh YM, Ali RA, Chan TH
    Australas Phys Eng Sci Med, 2006 Jun;29(2):216-28.
    PMID: 16845928
    Increased inter-equipment connectivity, coupled with advances in Web technology, allows ever-escalating amounts of physiological data to be produced, far too much to be displayed adequately on a single computer screen. The consequence is that large quantities of insignificant data are transmitted and reviewed, which carries an increased risk of overlooking vitally important transients. This paper describes a technique that provides an integrated solution, based on a single algorithm, for the efficient analysis, compression and remote display of long-term physiological signals containing infrequent, short-duration yet vital events, in order to reduce data transmission and display clutter and to facilitate reliable data interpretation. The algorithm analyses data at the server end and flags significant events. It produces a compressed version of the signal at a lower resolution that can be satisfactorily viewed in a single screen width. This reduced data set is initially transmitted together with a set of 'flags' indicating where significant events occur. Subsequent transmissions need only involve the flagged data segments of interest at the required resolution. Efficient processing and code protection using decomposition alone is novel. The fixed transmission length method ensures a clutter-free display, irrespective of the data length. The flagging of annotated events in arterial oxygen saturation, electroencephalogram and electrocardiogram signals illustrates the generic nature of the algorithm. Data reductions of 87% to 99% and improved displays are demonstrated.
    Matched MeSH terms: Database Management Systems
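    Entry 16 transmits a coarse, screen-width version of a long signal together with flags marking the segments that contain significant events, so that only flagged segments need be re-fetched at full resolution. The sketch below illustrates that transmit-then-fetch idea with a crude z-score flagging rule as a placeholder for the paper's decomposition-based analysis.
```python
import numpy as np

def summarize(signal: np.ndarray, screen_width: int = 1000, z_thresh: float = 4.0):
    """Return a screen-width coarse trace plus per-segment significance flags."""
    seg_len = max(1, len(signal) // screen_width)
    n_seg = len(signal) // seg_len
    segments = signal[: n_seg * seg_len].reshape(n_seg, seg_len)
    coarse = segments.mean(axis=1)                        # one value per screen pixel
    z = np.abs(segments - signal.mean()) / signal.std()   # crude significance test
    flags = z.max(axis=1) > z_thresh                      # segments worth re-fetching
    return coarse, flags

def fetch_segment(signal: np.ndarray, index: int, screen_width: int = 1000) -> np.ndarray:
    """Return one flagged segment at full resolution (the second-stage transmission)."""
    seg_len = max(1, len(signal) // screen_width)
    return signal[index * seg_len : (index + 1) * seg_len]
```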