
  1. Ali T, Jan S, Alkhodre A, Nauman M, Amin M, Siddiqui MS
    PeerJ Comput Sci, 2019;5:e216.
    PMID: 33816869 DOI: 10.7717/peerj-cs.216
    Conventional paper currency and modern electronic currency are two important modes of transaction. In several parts of the world, the conventional method still takes clear precedence over its electronic counterpart. However, identifying forged currency notes is becoming an increasingly crucial problem because of the new and improved tactics employed by counterfeiters. In this paper, a machine-assisted system, dubbed DeepMoney, is proposed to discriminate fake notes from genuine ones. For this purpose, state-of-the-art machine learning models called Generative Adversarial Networks (GANs) are employed. GANs use unsupervised learning to train a model that can then be used to perform supervised predictions. This flexibility provides the best of both worlds by allowing the model to be trained on unlabelled data while still making concrete predictions. The technique was applied to Pakistani banknotes. State-of-the-art image processing and feature recognition techniques were used to design the overall approach and validate the input. Experiments on augmented image samples show that a high-precision machine can be developed to recognize genuine paper money; an accuracy of 80% was achieved. The code is available as open source to allow others to reproduce and build upon the efforts already made.
    Matched MeSH terms: Image Processing, Computer-Assisted
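    The abstract gives no implementation details; as a rough illustration of the semi-supervised GAN idea it describes (adversarial training on unlabelled images, with the discriminator then reused for supervised prediction), a minimal PyTorch sketch follows. The layer sizes and the 64x64 patch size are hypothetical, not the DeepMoney architecture.
      # Minimal semi-supervised GAN sketch (hypothetical sizes; not the DeepMoney code).
      import torch
      import torch.nn as nn

      IMG = 64 * 64  # flattened grayscale banknote patch (assumed size)

      # Generator maps a 100-d noise vector to a fake patch; discriminator scores realness.
      G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
      D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

      opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
      opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
      bce = nn.BCEWithLogitsLoss()

      real = torch.rand(32, IMG) * 2 - 1  # stand-in for genuine-note patches

      # Phase 1: unsupervised adversarial training; D learns what "genuine" looks like.
      for _ in range(100):
          z = torch.randn(32, 100)
          fake = G(z)
          d_loss = (bce(D(real), torch.ones(32, 1))
                    + bce(D(fake.detach()), torch.zeros(32, 1)))
          opt_d.zero_grad()
          d_loss.backward()
          opt_d.step()

          g_loss = bce(D(G(z)), torch.ones(32, 1))
          opt_g.zero_grad()
          g_loss.backward()
          opt_g.step()

      # Phase 2: the trained discriminator's score serves as a genuine/forged predictor.
      suspect = torch.rand(1, IMG) * 2 - 1
      print("genuineness logit:", D(suspect).item())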
  2. Sabarudin A, Tiau YJ
    Quant Imaging Med Surg, 2013 Feb;3(1):43-8.
    PMID: 23483085 DOI: 10.3978/j.issn.2223-4292.2013.02.07
    This study was designed to compare and evaluate the diagnostic image quality of dental panoramic radiography between conventional and digital systems. Fifty-four panoramic images were collected and divided into three groups: conventional, digital with post-processing, and digital without post-processing. Each image was printed out and scored subjectively by two experienced dentists who were blinded to the exposure parameters and system protocols. The evaluation covered anatomical coverage and structures, density, and image contrast. The overall image quality scores revealed that digital panoramic radiography with post-processing scored highest at 3.45±0.19, followed by the digital system without post-processing and the conventional system, with corresponding scores of 3.33±0.33 and 2.06±0.40. In conclusion, images produced by the digital panoramic system have better diagnostic image quality than those from the conventional panoramic system. Digital post-processing visualization can significantly improve diagnostic quality in terms of radiographic density and contrast.
    Matched MeSH terms: Image Processing, Computer-Assisted
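    The study does not specify which post-processing operations the digital system applied; contrast-limited adaptive histogram equalization (CLAHE) is one common way to improve radiographic density and contrast, sketched here with scikit-image on synthetic data.
      # Hypothetical post-processing step: CLAHE contrast enhancement (scikit-image).
      import numpy as np
      from skimage import exposure

      rng = np.random.default_rng(0)
      radiograph = rng.random((512, 1024))  # stand-in for a panoramic image in [0, 1]

      # equalize_adapthist applies contrast-limited adaptive histogram equalization.
      enhanced = exposure.equalize_adapthist(radiograph, clip_limit=0.02)
      print(enhanced.min(), enhanced.max())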
  3. Ebrahimkhani S, Jaward MH, Cicuttini FM, Dharmaratne A, Wang Y, de Herrera AGS
    Artif Intell Med, 2020 06;106:101851.
    PMID: 32593389 DOI: 10.1016/j.artmed.2020.101851
    In this paper, we review the state-of-the-art approaches for knee articular cartilage segmentation, from conventional techniques to deep learning (DL) based techniques. Knee articular cartilage segmentation on magnetic resonance (MR) images is of great importance in the early diagnosis of osteoarthritis (OA). In addition, segmentation allows estimating the rate of articular cartilage loss, which is used in clinical practice to assess disease progression and morphological changes. It has traditionally been applied in quantifying longitudinal knee OA progression patterns to detect and assess articular cartilage thickness and volume. Topics covered include various image processing algorithms, the major features of different segmentation techniques, feature computations, and performance evaluation metrics. This paper is intended to provide researchers with a broad overview of the currently existing methods in the field, as well as to highlight shortcomings and potential considerations for clinical application. The survey showed that state-of-the-art techniques based on DL outperform the other segmentation methods. The analysis of existing methods reveals that integrating DL-based algorithms with traditional model-based approaches has achieved the best results (mean Dice similarity coefficient (DSC) between 85.8% and 90%).
    Matched MeSH terms: Image Processing, Computer-Assisted
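    For reference, the review's headline metric, the Dice similarity coefficient (DSC), can be computed from a predicted and a reference binary mask as follows (toy masks in place of cartilage segmentations).
      # Dice similarity coefficient between a predicted and a reference binary mask.
      import numpy as np

      def dice(pred: np.ndarray, ref: np.ndarray) -> float:
          pred, ref = pred.astype(bool), ref.astype(bool)
          inter = np.logical_and(pred, ref).sum()
          denom = pred.sum() + ref.sum()
          return 2.0 * inter / denom if denom else 1.0  # DSC = 2|A n B| / (|A| + |B|)

      a = np.zeros((64, 64), bool); a[16:48, 16:48] = True  # toy "cartilage" masks
      b = np.zeros((64, 64), bool); b[20:52, 16:48] = True
      print(f"DSC = {dice(a, b):.3f}")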
  4. Tan M, Al-Shabi M, Chan WY, Thomas L, Rahmat K, Ng KH
    Med Biol Eng Comput, 2021 Feb;59(2):355-367.
    PMID: 33447988 DOI: 10.1007/s11517-021-02313-1
    This study objectively evaluates the similarity between standard full-field digital mammograms and two-dimensional synthesized digital mammograms (2DSM) in a cohort of women undergoing mammography. Under an institutional review board-approved data collection protocol, we retrospectively analyzed 407 women with digital breast tomosynthesis (DBT) and full-field digital mammography (FFDM) examinations performed from September 1, 2014, through February 29, 2016. Both FFDM and 2DSM images were used for the analysis, and a total of 3216 available craniocaudal (CC) and mediolateral oblique (MLO) view mammograms were included in the dataset. We analyzed the mammograms using a fully automated algorithm that computes 152 structural similarity, texture, and mammographic density-based features. We trained and developed two global mammographic image feature analysis-based breast cancer detection schemes, for 2DSM and FFDM images respectively. The highest structural similarity was obtained on the coarse Weber Local Descriptor differential excitation texture feature component, computed on the CC view images (0.8770) and MLO view images (0.8889). Although the coarse structures are similar, the global mammographic image feature-based cancer detection scheme trained on 2DSM images outperformed the corresponding scheme trained on FFDM images, with area under the receiver operating characteristic curve (AUC) = 0.878 ± 0.034 and 0.756 ± 0.052, respectively. Consequently, further investigation is required to examine whether DBT can replace FFDM as a standalone technique, especially for the development of automated objective-based methods.
    Matched MeSH terms: Image Processing, Computer-Assisted
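    The exact 152-feature pipeline is not described in the abstract, but the basic structural similarity index underlying such features can be computed between an FFDM view and its 2DSM counterpart with scikit-image; the toy images below stand in for real mammograms.
      # Structural similarity between two mammographic views (toy data).
      import numpy as np
      from skimage.metrics import structural_similarity

      rng = np.random.default_rng(1)
      ffdm = rng.random((256, 256))                        # FFDM stand-in
      sm2d = ffdm + 0.05 * rng.standard_normal((256, 256)) # 2DSM stand-in

      score = structural_similarity(ffdm, sm2d, data_range=sm2d.max() - sm2d.min())
      print(f"SSIM = {score:.4f}")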
  5. Al-Ghaili, Abbas M., Syamsiah Mashohor, Abdul Rahman Ramli, Alyani Ismail
    MyJurnal
    Recently, license plate detection has been used in many applications, especially in transportation systems. Many methods have been proposed to detect license plates, but most work only under restricted conditions such as fixed illumination, stationary backgrounds, and high-resolution images. License plate detection plays an important role in car license plate recognition systems because it affects the accuracy and processing time of the system. This work aims to build a Car License Plate Detection (CLPD) system with lower-cost hardware and less algorithmic complexity, and then compare its performance with the local CAR Plate Extraction Technology (CARPET). As Malaysian plates have a special design that differs from other international plates, this work compares two methods of similar design. The images for both systems are taken using a web camera. One of the most important contributions of this paper is that the proposed CLPD method uses a Vertical Edge Detection Algorithm (VEDA) to extract the vertical edges of plates. The proposed CLPD method can detect the region of car license plates. The total time to process one 352x288 image is 47.7 ms, which meets the requirement of real-time processing. On the experimental datasets, which were taken from real scenes, 579 out of 643 images were successfully detected, and the average accuracy of locating car license plates was 90%. In this work, CARPET and the proposed CLPD method were compared on the same test images in terms of detection rate and efficiency. The results indicated detection rates of 92% and 84% for the CLPD method and CARPET, respectively. The results also showed that the CLPD method could detect license plates in dark images, whereas CARPET failed to do so.
    Matched MeSH terms: Image Processing, Computer-Assisted
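    VEDA itself is the authors' algorithm and is not reproduced here; a generic vertical-edge pass with a Sobel derivative kernel captures the same intuition (plate characters produce dense vertical edges). The 352x288 frame size follows the paper; the data is random.
      # Generic vertical-edge detection with a Sobel x-kernel (not the authors' VEDA).
      import numpy as np
      from scipy.ndimage import sobel

      rng = np.random.default_rng(2)
      frame = rng.random((288, 352))  # 352x288 frame, as in the paper

      # The derivative along axis 1 (columns) responds to vertical edges.
      vertical_edges = np.abs(sobel(frame, axis=1))
      mask = vertical_edges > np.percentile(vertical_edges, 95)
      print("candidate edge pixels:", int(mask.sum()))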
  6. Khairul Anuar Mohd Salleh, Ab. Razak Hamzah, Wan Muhammad Saridan Wan Hassan
    MyJurnal
    Developments in computer technology and image processing have shifted conventional industrial radiography toward industrial digital radiography (IDR) systems. In this study, two types of IDR modules for non-destructive testing (NDT), namely drum- and laser-type film digitizers with 50 μm pixel pitch, were evaluated for NDT applications. Modulation transfer function (MTF) and noise power spectrum (NPS) measurements were adapted to evaluate the image quality of IDR images. Results showed that the average MTF at 20% modulation was 6.15 cycles/mm for the drum-type and 6.55 cycles/mm for the laser-type film digitizer. The NPS measurements showed that the drum-type film digitizer produced higher noise than the laser-type. The study shows that the laser-type film digitizer is the better system for film digitization purposes, because the MTF results show that it modulates better than the drum type, and it has a lower and more stable NPS.
    Matched MeSH terms: Image Processing, Computer-Assisted
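    As a reference for the NPS measurement mentioned above, a two-dimensional noise power spectrum can be estimated from a flat-field region by Fourier transforming the mean-subtracted noise. This sketch assumes the 50 μm pixel pitch from the study and synthetic white noise in place of digitizer output.
      # 2-D noise power spectrum (NPS) estimate from a flat-field region of interest.
      import numpy as np

      PIXEL_PITCH_MM = 0.05                  # 50 um pixel pitch, as in the study
      rng = np.random.default_rng(3)
      roi = rng.standard_normal((256, 256))  # stand-in for digitizer noise

      noise = roi - roi.mean()
      nx, ny = noise.shape
      # NPS(u, v) = |DFT(noise)|^2 * (dx * dy) / (Nx * Ny)
      nps = (np.abs(np.fft.fft2(noise)) ** 2) * PIXEL_PITCH_MM**2 / (nx * ny)
      freqs = np.fft.fftfreq(nx, d=PIXEL_PITCH_MM)  # spatial frequencies, cycles/mm
      print("max sampled frequency: %.2f cycles/mm" % freqs.max())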
  7. Hassan, M.K., Eko, N.F.H., Shafie, S.
    MyJurnal
    Tuberculosis (TB) is the second biggest killer disease after HIV; therefore, early detection is vital to prevent its outbreak. This paper presents automated TB bacteria counting using image processing techniques, with a Matlab Graphical User Interface (GUI) for analysing the results. The image processing pipeline used in this project involved image acquisition, image pre-processing, and image segmentation. To separate overlapping TB bacteria, watershed segmentation was proposed and implemented. Two watershed techniques were considered: distance-transform watershed segmentation and marker-based watershed segmentation. Marker-based watershed segmentation achieved 81.08% accuracy, compared with 59.06% for the distance-transform variant; both accuracies were benchmarked against manual inspection. It was observed that distance-transform watershed segmentation suffers from over-segmentation and produces inaccurate results. The automatic TB bacteria counting algorithms also proved less time-consuming, less prone to human error, and less labour-intensive than manual counting.
    Matched MeSH terms: Image Processing, Computer-Assisted
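    A minimal sketch of the marker-based watershed approach described above, using SciPy and scikit-image; two overlapping synthetic disks stand in for touching bacteria.
      # Marker-based watershed to split touching blobs (scikit-image/SciPy sketch).
      import numpy as np
      from scipy import ndimage as ndi
      from skimage.segmentation import watershed
      from skimage.feature import peak_local_max

      # Two overlapping disks stand in for touching bacteria.
      yy, xx = np.mgrid[0:80, 0:80]
      blob = (((xx - 30) ** 2 + (yy - 40) ** 2 < 15 ** 2)
              | ((xx - 50) ** 2 + (yy - 40) ** 2 < 15 ** 2))

      distance = ndi.distance_transform_edt(blob)  # distance transform
      peaks = peak_local_max(distance, labels=blob.astype(int), min_distance=10)
      markers = np.zeros_like(blob, dtype=int)
      markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)  # one marker per object

      labels = watershed(-distance, markers, mask=blob)  # flood from the markers
      print("objects counted:", labels.max())            # expect 2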
  8. Ahmed O, Yushou Song
    Sains Malaysiana, 2018;47:1883-1890.
    X-ray computed tomography (XCT) has become an important instrument for quality assurance of industrial products, as a non-destructive testing tool for inspection, evaluation, analysis, and dimensional metrology. Thus, a high-quality image is required. The polychromatic nature of the X-ray spectrum in XCT leads to errors in the measured attenuation coefficients, generally known as beam-hardening artifacts. These appear as distortions such as cupping and streaks in the reconstructed images, significantly degrading image quality. In this paper, recent research publications on common practical correction methods adopted to improve image quality are discussed. From the discussion and evaluation, it was observed that reducing beam hardening for multi-material objects, especially in the absence of prior information about the X-ray spectrum and material characteristics, would be a significant research contribution if the correction could be achieved without the need for forward projections and multiple reconstructions.
    Matched MeSH terms: Image Processing, Computer-Assisted
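    One family of practical corrections such reviews typically cover is linearization: fit the measured polychromatic attenuation against material thickness and map it back to the ideal linear monochromatic response. A toy single-material sketch, with invented coefficients:
      # Linearization-type beam-hardening correction (toy single-material sketch).
      import numpy as np

      thickness = np.linspace(0, 5, 50)                 # path length through material (cm)
      ideal = 0.8 * thickness                           # ideal monochromatic attenuation
      measured = 0.8 * thickness - 0.05 * thickness**2  # beam hardening flattens the curve

      # Fit a polynomial mapping measured -> ideal attenuation, then apply it.
      coeffs = np.polyfit(measured, ideal, deg=3)
      corrected = np.polyval(coeffs, measured)
      print("max residual after correction: %.2e" % np.abs(corrected - ideal).max())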
  9. Mustafa S, Iqbal MW, Rana TA, Jaffar A, Shiraz M, Arif M, et al.
    Comput Intell Neurosci, 2022;2022:4348235.
    PMID: 35909861 DOI: 10.1155/2022/4348235
    Malignant melanoma is considered one of the deadliest skin diseases if ignored without treatment. The mortality rate caused by melanoma is more than twice that of other skin malignancies. These facts encourage computer scientists to find automated methods to discover skin cancers. Nowadays, the analysis of skin images is widely used by assistant physicians to discover the first stage of the disease automatically. One challenge computer science researchers face when developing such a system is the poor quality of the available images, with artifacts such as shadows, low contrast, hairs, and specular reflections, which complicate detecting skin lesions in these images. This paper proposes a solution to this problem using the active contour method. However, seed selection remains the main drawback of the active contour method: where should the segmentation process start? This paper uses Gaussian filter-based maximum entropy and morphological processing methods to find automatic seed points for the active contour. By incorporating these, the lesion can be segmented from dermoscopic images automatically. The proposed methodology was evaluated with quantitative and qualitative measures on the standard DermIS dataset to test its reliability, showing encouraging results.
    Matched MeSH terms: Image Processing, Computer-Assisted
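    A rough scikit-image sketch of the automatic-initialization idea: smooth the image, threshold it to obtain seed regions, clean them morphologically, and evolve a contour from there. Otsu thresholding and the morphological Chan-Vese contour stand in for the paper's maximum-entropy thresholding and its exact active contour model.
      # Automatic initialization + morphological active contour (scikit-image sketch).
      import numpy as np
      from scipy import ndimage as ndi
      from skimage.filters import gaussian, threshold_otsu
      from skimage.segmentation import morphological_chan_vese

      yy, xx = np.mgrid[0:96, 0:96]
      lesion = np.exp(-(((xx - 48) ** 2 + (yy - 48) ** 2) / 300.0))  # bright-blob stand-in

      smooth = gaussian(lesion, sigma=2)               # Gaussian filtering (as in the paper)
      seeds = smooth > threshold_otsu(smooth)          # Otsu here; the paper uses max-entropy
      seeds = ndi.binary_opening(seeds, iterations=2)  # morphological clean-up

      # Evolve a Chan-Vese-style contour from the automatic seed region.
      segmentation = morphological_chan_vese(smooth, num_iter=50, init_level_set=seeds)
      print("lesion pixels:", int(segmentation.sum()))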
  10. Vineth Ligi S, Kundu SS, Kumar R, Narayanamoorthi R, Lai KW, Dhanalakshmi S
    J Healthc Eng, 2022;2022:5998042.
    PMID: 35251572 DOI: 10.1155/2022/5998042
    Pulmonary medical image analysis using image processing and deep learning approaches has made remarkable achievements in the diagnosis, prognosis, and severity assessment of lung diseases. The COVID-19 epidemic brought about by the novel coronavirus triggered a critical need for artificial intelligence assistance in diagnosing and controlling the disease to reduce its effects on people and global economies. This study aimed at identifying the various COVID-19 medical imaging analysis models proposed by different researchers and featured their merits and demerits. It gives a detailed discussion of the existing COVID-19 detection methodologies (diagnosis, prognosis, and severity/risk detection) and the challenges encountered. It also highlights the various preprocessing and post-processing methods involved in enhancing the detection mechanism. This work also tries to bring out the unexplored research areas available for medical image analysis and how the extensive research done on COVID-19 can advance the field. Although deep learning methods present high levels of efficiency, some of their limitations are briefly described in the study. Hence, this review can help in understanding the utilization, pros, and cons of deep learning for analyzing medical images.
    Matched MeSH terms: Image Processing, Computer-Assisted
  11. Soleymani A, Nordin MJ, Sundararajan E
    ScientificWorldJournal, 2014;2014:536930.
    PMID: 25258724 DOI: 10.1155/2014/536930
    The rapid evolution of imaging and communication technologies has transformed images into a widespread data type. Different types of data, such as personal medical information, official correspondence, or governmental and military documents, are saved and transmitted in the form of images over public networks. Hence, a fast and secure cryptosystem is needed for high-resolution images. In this paper, a novel encryption scheme is presented for securing images based on the Arnold cat and Henon chaotic maps. The scheme uses the Arnold cat map for bit- and pixel-level permutations on plain and secret images, while the Henon map creates the secret images and the specific parameters for the permutations. Both the encryption and decryption processes are explained, formulated, and graphically presented. The results of a security analysis of five different images demonstrate the strength of the proposed cryptosystem against statistical, brute-force, and differential attacks. The evaluated running times for both the encryption and decryption processes guarantee that the cryptosystem can work effectively in real-time applications.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*; Image Processing, Computer-Assisted/standards
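    For reference, the Arnold cat map permutation that the scheme builds on sends pixel (x, y) of an N×N image to ((x + y) mod N, (x + 2y) mod N). A minimal NumPy sketch of the pixel-level permutation follows; the Henon-map key generation and bit-level permutation of the full scheme are omitted.
      # Arnold cat map pixel permutation on an N x N image (one building block of
      # the scheme; Henon-map key generation and bit-level steps are omitted).
      import numpy as np

      def arnold_cat(img: np.ndarray, iterations: int = 1) -> np.ndarray:
          n = img.shape[0]  # requires a square image
          x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
          out = img
          for _ in range(iterations):
              nxt = np.empty_like(out)
              nxt[(x + y) % n, (x + 2 * y) % n] = out  # (x, y) -> ((x+y)%N, (x+2y)%N)
              out = nxt
          return out

      rng = np.random.default_rng(4)
      plain = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
      scrambled = arnold_cat(plain, iterations=5)
      # The map is periodic: iterating far enough eventually restores the image.
      assert not np.array_equal(plain, scrambled)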
  12. Sim KS, Lai MA, Tso CP, Teo CC
    J Med Syst, 2011 Feb;35(1):39-48.
    PMID: 20703587 DOI: 10.1007/s10916-009-9339-9
    A novel technique to quantify the signal-to-noise ratio (SNR) of magnetic resonance images is developed. The image SNR is quantified by estimating the amplitude of the signal spectrum using the autocorrelation function of just one single magnetic resonance image. To test the performance of the quantification, SNR measurement data are fitted to theoretically expected curves. It is shown that the technique can be implemented in a highly efficient way for the magnetic resonance imaging system.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*; Image Processing, Computer-Assisted/statistics & numerical data
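    The key idea above is that, for white noise, only the zero-lag value of the autocorrelation contains noise power, so extrapolating the autocorrelation from neighbouring lags back to lag zero estimates the noise-free signal power. A rough one-dimensional NumPy sketch of that idea (not the authors' exact estimator):
      # Single-image SNR from the autocorrelation function (rough 1-D sketch).
      import numpy as np

      rng = np.random.default_rng(5)
      n = 4096
      signal = np.sin(np.linspace(0, 60 * np.pi, n))  # smooth stand-in for MR signal
      noisy = signal + 0.3 * rng.standard_normal(n)

      x = noisy - noisy.mean()
      acf = np.correlate(x, x, mode="full")[n - 1:] / n  # autocorrelation, lags 0..n-1

      total_power = acf[0]                       # signal + noise power at lag 0
      signal_power = acf[1] + (acf[1] - acf[2])  # linear extrapolation back to lag 0
      noise_power = total_power - signal_power
      print("estimated SNR: %.1f dB" % (10 * np.log10(signal_power / noise_power)))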
  13. Idroas M, Rahim RA, Green RG, Ibrahim MN, Rahiman MH
    Sensors (Basel), 2010;10(10):9512-28.
    PMID: 22163423 DOI: 10.3390/s101009512
    This research investigates the use of charge coupled device (CCD) linear image sensors in an optical tomographic instrumentation system used for sizing particles. The measurement system, consisting of four CCD linear image sensors configured around an octagonal flow pipe to provide four projections, is explained. The four linear image sensors provide 2,048-pixel imaging with a pixel size of 14 micron × 14 micron, hence constituting a high-resolution system. Image reconstruction for a four-projection optical tomography system is also discussed, where a simple optical model is used to relate attenuation to variations in optical density, [R], within the measurement section. Expressed in matrix form, this represents the forward problem in tomography, [S][R] = [M]. In practice, measurements [M] are used to estimate the optical density distribution by solving the inverse problem [R] = [S]^(-1)[M]. Direct inversion of the sensitivity matrix, [S], is not possible, so two approximations are considered and compared: the transpose and the pseudo-inverse of the sensitivity matrix.
    Matched MeSH terms: Image Processing, Computer-Assisted/instrumentation*; Image Processing, Computer-Assisted/methods*
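    The forward and inverse problems quoted above can be demonstrated directly: build a sensitivity matrix [S], form measurements [M] = [S][R], and compare the two approximations the paper considers, the transpose and the Moore-Penrose pseudo-inverse. A small NumPy sketch with invented dimensions:
      # Tomographic forward/inverse problem: compare S-transpose vs pseudo-inverse.
      import numpy as np

      rng = np.random.default_rng(6)
      S = rng.random((64, 256))  # sensitivity matrix: 64 ray measurements, 256 pixels
      R_true = rng.random(256)   # optical density distribution [R]

      M = S @ R_true             # forward problem:  [S][R] = [M]

      R_transpose = S.T @ M             # back-projection approximation: [R] ~ [S]^T [M]
      R_pinv = np.linalg.pinv(S) @ M    # pseudo-inverse:                [R] ~ [S]^+ [M]

      for name, est in [("transpose", R_transpose), ("pseudo-inverse", R_pinv)]:
          err = np.linalg.norm(est - R_true) / np.linalg.norm(R_true)
          print(f"{name:>14}: relative error = {err:.3f}")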
  14. Tiong KH, Chang JK, Pathmanathan D, Hidayatullah Fadlullah MZ, Yee PS, Liew CS, et al.
    Biotechniques, 2018 12;65(6):322-330.
    PMID: 30477327 DOI: 10.2144/btn-2018-0072
    We describe novel automated cell detection and counting software, QuickCount® (QC), designed for rapid quantification of cells. Bland-Altman plot and intraclass correlation coefficient (ICC) analyses demonstrated strong agreement between QC counts and manual counts (mean and SD: -3.3 ± 4.5; ICC = 0.95). QC has higher recall than ImageJauto, CellProfiler, and CellC, while the precision of all four tools is high and comparable. QC can delineate and count single cells from images of different cell densities with precision and recall above 0.9. QC is unique in that it provides a real-time preview while the parameters are optimized for accurate cell counts, and it needs minimal hands-on time: hundreds of images can be analyzed automatically in a matter of milliseconds. In conclusion, QC offers a rapid, accurate and versatile solution for large-scale cell quantification and addresses challenges often faced in cell biology research.
    Matched MeSH terms: Image Processing, Computer-Assisted/economics; Image Processing, Computer-Assisted/methods*
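    QuickCount's algorithm is not described in the abstract; the baseline operation it automates, detecting and counting cells, can be sketched with simple thresholding and connected-component labelling in scikit-image (synthetic blobs in place of micrographs):
      # Baseline automated cell counting: threshold + connected components
      # (illustrative only; not the QuickCount algorithm).
      import numpy as np
      from skimage.filters import threshold_otsu
      from skimage.measure import label, regionprops

      rng = np.random.default_rng(7)
      img = rng.random((128, 128)) * 0.2
      yy, xx = np.mgrid[0:128, 0:128]
      for cx, cy in [(30, 30), (90, 40), (60, 100)]:  # three synthetic cells
          img += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 40.0)

      mask = img > threshold_otsu(img)
      labels = label(mask)                                     # connected components
      cells = [r for r in regionprops(labels) if r.area > 10]  # drop tiny specks
      print("cell count:", len(cells))                         # expect 3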
  15. Mehdy MM, Ng PY, Shair EF, Saleh NIM, Gomes C
    Comput Math Methods Med, 2017;2017:2610628.
    PMID: 28473865 DOI: 10.1155/2017/2610628
    Medical imaging techniques have been widely used in the diagnosis and detection of breast cancer. The drawback of applying these techniques is the large amount of time consumed by manual diagnosis of each image pattern by a professional radiologist. Automated classifiers could substantially upgrade the diagnosis process, in terms of both accuracy and time required, by distinguishing benign and malignant patterns automatically. Neural networks (NN) play an important role in this respect, especially in the application of breast cancer detection. Despite the large number of publications that describe the utilization of NN in various medical techniques, only a few reviews are available that guide the development of these algorithms to enhance the detection techniques with respect to specificity and sensitivity. The purpose of this review is to analyze the contents of recently published literature with special attention to techniques and the state of the art of NN in medical imaging. We discuss the usage of NN in four different medical imaging applications to show that NN is not restricted to a few areas of medicine. The types of NN used, along with the various types of input data, are reviewed. We also address hybrid NN adaptation in breast cancer detection.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*; Image Processing, Computer-Assisted/standards
  16. Abdul Rahim R, Leong LC, Chan KS, Rahiman MH, Pang JF
    ISA Trans, 2008 Jan;47(1):3-14.
    PMID: 17709106
    This paper presents the implementation of a multiple fan-beam projection technique using optical fibre sensors for a tomography system. A dynamic experiment on solid/gas flow using plastic beads in a gravity flow rig showed that the designed optical fibre sensors are reliable in measuring mass flow rates below 40% of flow. Another important matter discussed is the image processing rate (IPR). Generally, the applied image reconstruction algorithms, the construction of the sensor, and the designed software are considered reliable and suitable for performing real-time image reconstruction and mass flow rate measurements.
    Matched MeSH terms: Image Processing, Computer-Assisted
  17. Jahanirad M, Wahab AW, Anuar NB
    Forensic Sci Int, 2016 May;262:242-75.
    PMID: 27060542 DOI: 10.1016/j.forsciint.2016.03.035
    Camera attribution plays an important role in digital image forensics by providing evidence of, and distinguishing characteristics for, the origin of a digital image. It allows the forensic analyser to find the possible source camera that captured the image under investigation. However, in real-world applications, these approaches face many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with a presentation classifying ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four classes, namely optical aberrations based, sensor camera fingerprints based, processing statistics based and processing regularities based. Furthermore, this paper investigates the challenging problems, and the proposed strategies of such schemes based on the suggested taxonomy, to plot an evolution of source camera attribution approaches with respect to subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution.
    Matched MeSH terms: Image Processing, Computer-Assisted
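    Among the sensor-fingerprint approaches surveyed, photo-response non-uniformity (PRNU) is the classic example: average the denoising residuals of many images from one camera to estimate its fingerprint, then correlate a test image's residual against it. A simplified sketch, with a Gaussian filter standing in for the wavelet denoiser normally used:
      # Simplified PRNU-style camera fingerprinting (Gaussian denoiser stand-in).
      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(8)
      prnu = 0.02 * rng.standard_normal((128, 128))  # hidden multiplicative sensor pattern

      def shoot(scene):
          # Toy camera model: scene modulated by PRNU plus readout noise.
          return scene * (1 + prnu) + 0.01 * rng.standard_normal(scene.shape)

      def residual(img):
          # Noise residual: the denoiser (here a Gaussian blur) removes scene content.
          return img - gaussian_filter(img, sigma=1.5)

      def smooth_scene():
          return 0.5 + gaussian_filter(rng.standard_normal((128, 128)), sigma=8)

      # Estimate the fingerprint by averaging residuals over many images.
      fingerprint = np.mean([residual(shoot(smooth_scene())) for _ in range(50)], axis=0)

      test_same = residual(shoot(smooth_scene()))
      corr = np.corrcoef(fingerprint.ravel(), test_same.ravel())[0, 1]
      print("same-camera correlation: %.3f" % corr)  # clearly positive for this camera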
  18. Ibrahim WM
    J Prosthet Dent, 1996 Jul;76(1):104.
    PMID: 8814640
    Matched MeSH terms: Image Processing, Computer-Assisted
  19. Jing W, Tao H, Rahman MA, Kabir MN, Yafeng L, Zhang R, et al.
    Work, 2021;68(3):923-934.
    PMID: 33612534 DOI: 10.3233/WOR-203426
    BACKGROUND: Human-Computer Interaction (HCI) is incorporated into a variety of applications for input processing and response actions. Facial recognition systems in workplaces and security systems help to improve the detection and classification of humans based on the visual input to the system.

    OBJECTIVES: In this manuscript, the Robotic Facial Recognition System using the Compound Classifier (RERS-CC) is introduced to improve the recognition rate of human faces. The process is differentiated into classification, detection, and recognition phases that employ principal component analysis based learning. In this learning process, the errors in image processing, based on the different extracted features, are used for error classification and accuracy improvement.

    RESULTS: The performance of the proposed RERS-CC is validated experimentally using the input image dataset in MATLAB tool. The performance results show that the proposed method improves detection and recognition accuracy with fewer errors and processing time.

    CONCLUSION: The input image is processed with knowledge of the features and errors observed at different orientations and time instances. With the help of dataset matching and similarity-index verification, the proposed method identifies the precise human face with increased true positives and recognition rate.

    Matched MeSH terms: Image Processing, Computer-Assisted
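    The "principal component analysis based learning" in the abstract is in the spirit of eigenfaces: project face images onto their principal components and match by distance in that subspace. A minimal NumPy sketch with random vectors in place of face images:
      # PCA-based face matching in the eigenfaces spirit (toy data).
      import numpy as np

      rng = np.random.default_rng(9)
      faces = rng.random((20, 32 * 32))  # 20 flattened "face" images

      mean = faces.mean(axis=0)
      centered = faces - mean
      # Principal components via SVD; keep the top 8 eigenfaces.
      _, _, vt = np.linalg.svd(centered, full_matrices=False)
      eigenfaces = vt[:8]

      def project(img):
          return eigenfaces @ (img - mean)

      gallery = np.array([project(f) for f in faces])
      probe = project(faces[3] + 0.05 * rng.standard_normal(32 * 32))  # noisy face 3

      match = np.argmin(np.linalg.norm(gallery - probe, axis=1))
      print("best match: face", match)  # expect 3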
  20. Chitturi V, Farrukh N
    J Electr Bioimpedance, 2019 Jan;10(1):96-102.
    PMID: 33584889 DOI: 10.2478/joeb-2019-0014
    Electrical impedance tomography (EIT) has large potential as a two-dimensional imaging technique and is gaining attention among researchers across various fields of engineering. Beamforming techniques stem from the array signal processing field and are used for spatial filtering of array data to evaluate the location of objects. In this work, the circular electrodes are treated as an array of sensors, and a beamforming technique is used to localize the object(s) in an electrical field. The conductivity distribution within a test tank is obtained by an EIT system in terms of electrode voltages. These voltages are then interpolated using elliptic partial differential equations. Finally, a narrowband beamformer detects the peak in the output response signal to localize the test object(s). Test results show that the beamforming technique can be used as a secondary method that may provide complementary information about the accurate position of the test object(s) using an eight-electrode EIT system. This method could open new avenues for spatial EIT data filtering techniques, with the understanding that the inverse problem is treated here as a source localization problem rather than an image reconstruction problem.
    Matched MeSH terms: Image Processing, Computer-Assisted
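    The narrowband beamforming idea borrowed from array processing can be sketched as follows: treat the electrode readings as an array snapshot, scan candidate source positions, and pick the position whose steering vector maximizes the beamformer output power. The toy model below uses free-space propagation phases, which a real EIT field does not obey, so it only illustrates the localization principle:
      # Narrowband delay-and-sum beamformer on a circular 8-sensor array (toy model;
      # a real EIT field needs the interpolated potential, not free-space delays).
      import numpy as np

      rng = np.random.default_rng(10)
      n_sensors, wavelength = 8, 0.2
      angles = 2 * np.pi * np.arange(n_sensors) / n_sensors
      sensors = np.c_[np.cos(angles), np.sin(angles)]  # unit-circle electrode ring

      def steering(pos):
          d = np.linalg.norm(sensors - pos, axis=1)    # sensor-to-source distances
          return np.exp(-2j * np.pi * d / wavelength) / np.sqrt(n_sensors)

      source = np.array([0.3, -0.2])
      snapshot = steering(source) + 0.05 * (rng.standard_normal(8)
                                            + 1j * rng.standard_normal(8))

      # Scan a grid inside the ring; the output power |w^H x|^2 peaks at the source.
      grid = np.linspace(-0.8, 0.8, 81)
      power = np.array([[abs(np.vdot(steering((x, y)), snapshot)) ** 2
                         for x in grid] for y in grid])
      iy, ix = np.unravel_index(power.argmax(), power.shape)
      print("estimated source position: (%.2f, %.2f)" % (grid[ix], grid[iy]))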