Displaying publications 161 - 180 of 255 in total

  1. Ibrahim RW, Hasan AM, Jalab HA
    Comput Methods Programs Biomed, 2018 Sep;163:21-28.
    PMID: 30119853 DOI: 10.1016/j.cmpb.2018.05.031
    BACKGROUND AND OBJECTIVES: MRI brain tumor segmentation is challenging due to variations in the size, shape, location and intensity features of tumors. Active contours have been applied to MRI scan segmentation because of their ability to produce regions with closed boundaries. The main difficulty in active contour segmentation is boundary tracking, which is controlled by minimization of an energy function. Hence, this study proposes a novel fractional Wright function (FWF) as an energy-minimization technique to improve the performance of the active contour without edge method.

    METHOD: In this study, we implement FWF as an energy minimization function to replace the standard gradient-descent method as minimization function in Chan-Vese segmentation technique. The proposed FWF is used to find the boundaries of an object by controlling the inside and outside values of the contour. In this study, the objective evaluation is used to distinguish the differences between the processed segmented images and ground truth using a set of statistical parameters; true positive, true negative, false positive, and false negative.

    RESULTS: The FWF as a minimization of energy was successfully implemented on BRATS 2013 image dataset. The achieved overall average sensitivity score of the brain tumors segmentation was 94.8 ± 4.7%.

    CONCLUSIONS: The results demonstrate that the proposed FWF method minimized the energy function more effectively than the gradient-descent method used in the original three-dimensional active contour without edge (3DACWE) method.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
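    The baseline that entry 1 improves on is the gradient-descent minimization of the Chan-Vese "active contour without edges" energy. A minimal numpy sketch of that baseline update is below (function names, the Laplacian smoothing surrogate, and the clipping in place of level-set reinitialization are illustrative simplifications, not the paper's FWF method):

    ```python
    import numpy as np

    def chan_vese_step(image, phi, mu=0.2, dt=0.5):
        """One gradient-descent update of the Chan-Vese energy.
        phi is a level-set function; the contour is its zero level set.
        Curvature is approximated by a discrete Laplacian, and phi is
        clipped as a cheap stand-in for reinitialization."""
        inside = phi > 0
        c1 = image[inside].mean() if inside.any() else 0.0     # mean inside
        c2 = image[~inside].mean() if (~inside).any() else 0.0  # mean outside
        # data term: push each pixel toward the region whose mean it matches
        force = (image - c2) ** 2 - (image - c1) ** 2
        # smoothing term: discrete Laplacian as a curvature surrogate
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
               np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
        return np.clip(phi + dt * (force + mu * lap), -1.0, 1.0)

    # toy example: recover a bright square on a dark background
    img = np.zeros((32, 32))
    img[8:24, 8:24] = 1.0
    phi = img - img.mean()            # crude initialization
    for _ in range(50):
        phi = chan_vese_step(img, phi)
    seg = phi > 0                     # final segmentation mask
    ```

    The paper's contribution is to replace exactly this gradient-descent step with an FWF-based minimizer; the inside/outside means c1 and c2 play the same role in both.
    
    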
  2. Iqbal U, Wah TY, Habib Ur Rehman M, Mujtaba G, Imran M, Shoaib M
    J Med Syst, 2018 Nov 05;42(12):252.
    PMID: 30397730 DOI: 10.1007/s10916-018-1107-2
    Electrocardiography (ECG) sensors play a vital role in the Internet of Medical Things, helping to monitor the electrical activity of the heart. ECG signal analysis can improve human life in many ways, from diagnosing diseases among cardiac patients to managing the lifestyles of diabetic patients. Abnormalities in heart activity lead to different cardiac diseases and arrhythmia. However, some cardiac diseases, such as myocardial infarction (MI) and atrial fibrillation (Af), require special attention due to their direct impact on human life. The classification of flattened T wave cases of MI in ECG signals, and how similar these cases are to the ST-T changes of MI, remains an open issue for researchers. This article presents a novel contribution to classify MI and Af. To this end, we propose a new approach called deep deterministic learning (DDL), which works by combining predefined heart activities with fused datasets. In this research, we used two datasets. The first dataset, Massachusetts Institute of Technology-Beth Israel Hospital, is publicly available; we exclusively obtained the second dataset from the University of Malaya Medical Center, Kuala Lumpur, Malaysia. We first applied predefined activities to each individual dataset to recognize patterns between the ST-T change and flattened T wave cases, and then used a data fusion approach to merge both datasets in a manner that delivers the most accurate pattern recognition results. The proposed DDL approach is a systematic stage-wise methodology that relies on accurate detection of R peaks in ECG signals, time-domain features of ECG signals, and fine-tuning of artificial neural networks. The empirical evaluation shows high accuracy (up to 99.97%) in pattern matching of ST-T changes and flattened T waves using the proposed DDL approach. The proposed pattern recognition approach is a significant contribution to the diagnosis of special cases of MI.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
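    The DDL pipeline in entry 2 starts from accurate R-peak detection and time-domain features. A deliberately naive sketch of that first stage, on a synthetic spike train rather than real ECG (the detector and its parameters are illustrative, not the paper's method):

    ```python
    import numpy as np

    def detect_r_peaks(sig, fs, thresh_frac=0.6, refractory=0.2):
        """Naive R-peak detector: amplitude threshold, local-maximum test,
        and a refractory period to suppress double detections."""
        thresh = thresh_frac * sig.max()
        min_gap = int(refractory * fs)
        peaks, last = [], -min_gap
        for i in range(1, len(sig) - 1):
            if sig[i] >= thresh and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]:
                if i - last >= min_gap:
                    peaks.append(i)
                    last = i
        return np.array(peaks)

    # synthetic 60-bpm spike train standing in for an ECG trace
    fs = 250
    t = np.arange(0, 5, 1 / fs)
    sig = np.zeros_like(t)
    sig[125::fs] = 1.0                 # one "R peak" per second
    peaks = detect_r_peaks(sig, fs)
    rr = np.diff(peaks) / fs           # RR intervals (seconds): the basic
                                       # time-domain feature used downstream
    ```

    Real detectors (Pan-Tompkins and successors) add band-pass filtering and adaptive thresholds; the point here is only the shape of the stage-wise pipeline.
    
    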
  3. Acharya UR, Faust O, Ciaccio EJ, Koh JEW, Oh SL, Tan RS, et al.
    Comput Methods Programs Biomed, 2019 Jul;175:163-178.
    PMID: 31104705 DOI: 10.1016/j.cmpb.2019.04.018
    BACKGROUND AND OBJECTIVE: Complex fractionated atrial electrograms (CFAE) may contain information concerning the electrophysiological substrate of atrial fibrillation (AF); therefore they are of interest to guide catheter ablation treatment of AF. Electrogram signals are shaped by activation events, which are dynamical in nature. This makes it difficult to establish which signal properties can provide insight into the ablation site location. Nonlinear measures may provide additional information. To test this hypothesis, we used nonlinear measures to analyze CFAE.

    METHODS: CFAE from several atrial sites, recorded for a duration of 16 s, were acquired from 10 patients with persistent and 9 patients with paroxysmal AF. These signals were appraised using non-overlapping windows of 1-, 2- and 4-s durations. The resulting data sets were analyzed with Recurrence Plots (RP) and Recurrence Quantification Analysis (RQA). The data was also quantified via entropy measures.

    RESULTS: RQA exhibited unique plots for persistent versus paroxysmal AF. Similar patterns were observed to repeat throughout the RPs. Trends were consistent for signal segments of 1, 2 and 4 s in duration. This suggests that the underlying signal generation process is also repetitive, and that this repetitiveness can be detected even in 1-s sequences. The results also showed that most entropy metrics exhibited higher measurement values (closer to equilibrium) for persistent AF data. It was also found that Determinism (DET), Trapping Time (TT), and Modified Multiscale Entropy (MMSE), extracted from signals acquired at the posterior atrial free wall, are highly discriminative of persistent versus paroxysmal AF data.

    CONCLUSIONS: Short data sequences are sufficient to provide information to discern persistent versus paroxysmal AF data with a significant difference, and can be useful to detect repeating patterns of atrial activation.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
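    Entry 3's core tools, the recurrence plot and the DET measure, can be sketched in a few lines. This is a minimal textbook version (no embedding dimension or time delay, and a 1-D series rather than electrograms), so it illustrates the quantities, not the paper's exact RQA settings:

    ```python
    import numpy as np

    def recurrence_matrix(x, eps):
        """Thresholded pairwise-distance (recurrence) matrix of a 1-D series."""
        d = np.abs(x[:, None] - x[None, :])
        return (d <= eps).astype(int)

    def determinism(R, lmin=2):
        """DET: fraction of recurrent points lying on diagonal lines >= lmin
        (main diagonal excluded; upper triangle only, since R is symmetric)."""
        n = R.shape[0]
        diag_hist = {}
        for k in range(1, n):
            diag = np.diagonal(R, offset=k)
            run = 0
            for v in list(diag) + [0]:        # trailing 0 flushes the last run
                if v:
                    run += 1
                elif run:
                    diag_hist[run] = diag_hist.get(run, 0) + 1
                    run = 0
        total = sum(l * c for l, c in diag_hist.items())
        long_pts = sum(l * c for l, c in diag_hist.items() if l >= lmin)
        return long_pts / total if total else 0.0

    x = np.sin(np.linspace(0, 8 * np.pi, 200))   # periodic -> deterministic
    R = recurrence_matrix(x, eps=0.1)
    det = determinism(R)
    ```

    A periodic signal produces long diagonal lines and a DET near 1; the paper exploits the drop in such structure to separate persistent from paroxysmal AF.
    
    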
  4. Bilal M, Anis H, Khan N, Qureshi I, Shah J, Kadir KA
    Biomed Res Int, 2019;2019:6139785.
    PMID: 31119178 DOI: 10.1155/2019/6139785
    Background: Motion is a major source of blurring and ghosting in recovered MR images. It is more challenging in Dynamic Contrast Enhancement (DCE) MRI because motion effects and rapid intensity changes in contrast agent are difficult to distinguish from each other.

    Material and Methods: In this study, we introduce a new technique to reduce motion artifacts in DCE MRI, based on data binning and the low rank plus sparse (L+S) reconstruction method. For data binning, radial k-space data is acquired continuously using the golden-angle radial sampling pattern and grouped into various motion states or bins. The respiratory signal for binning is extracted directly from the radially acquired k-space data. A compressed sensing- (CS-) based L+S matrix decomposition model is then used to reconstruct motion-sorted DCE MR images. Undersampled free-breathing 3D liver and abdominal DCE MR datasets are used to validate the proposed technique.

    Results: The performance of the technique is compared with conventional L+S decomposition qualitatively along with the image sharpness and structural similarity index. Recovered images are visually sharper and have better similarity with reference images.

    Conclusion: L+S decomposition provides improved MR images with data binning as a preprocessing step in the free-breathing scenario. Data binning resolves respiratory motion by sorting the different respiratory positions into multiple bins, and it also separates respiratory motion from contrast agent (CA) variations. MR images recovered for each bin are better than those recovered without data binning.

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
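    The L+S split at the heart of entry 4 separates a low-rank background from sparse changes. A small synthetic sketch using an alternating proximal scheme (this ISTA-style alternation and its thresholds are a generic stand-in, not the authors' solver, which works on undersampled k-space data):

    ```python
    import numpy as np

    def svt(X, tau):
        """Singular value thresholding: shrink singular values by tau."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def soft(X, tau):
        """Elementwise soft thresholding."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def l_plus_s(M, tau_l=1.0, tau_s=0.5, n_iter=200):
        """Alternate a low-rank update (SVT) and a sparse update (soft
        thresholding) so that M ~ L + S."""
        L = np.zeros_like(M)
        S = np.zeros_like(M)
        for _ in range(n_iter):
            L = svt(M - S, tau_l)
            S = soft(M - L, tau_s)
        return L, S

    # synthetic test: rank-1 "background" plus sparse "dynamics"
    rng = np.random.default_rng(0)
    L0 = np.outer(rng.standard_normal(40), rng.standard_normal(30))
    S0 = np.zeros((40, 30))
    S0[rng.random((40, 30)) < 0.02] = 5.0
    L, S = l_plus_s(L0 + S0)
    ```

    In the paper's setting each motion bin contributes columns to M, so respiratory states end up in L while contrast-agent dynamics land in S.
    
    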
  5. Rajagopal H, Mokhtar N, Tengku Mohmed Noor Izam TF, Wan Ahmad WK
    PLoS One, 2020;15(5):e0233320.
    PMID: 32428043 DOI: 10.1371/journal.pone.0233320
    Image Quality Assessment (IQA) is essential for the accuracy of systems for automatic recognition of tree species from wood samples. In this study, a No-Reference IQA (NR-IQA) metric for wood images, WNR-IQA, was proposed. Support Vector Regression (SVR) was trained using Generalized Gaussian Distribution (GGD) and Asymmetric Generalized Gaussian Distribution (AGGD) features measured for wood images, while the Mean Opinion Score (MOS) was obtained from subjective evaluation. This was followed by a comparison between the proposed WNR-IQA metric, three established NR-IQA metrics, namely Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), deepIQA, and Deep Bilinear Convolutional Neural Networks (DB-CNN), and five Full-Reference IQA (FR-IQA) metrics known as MSSIM, SSIM, FSIM, IWSSIM, and GMSD. The proposed WNR-IQA metric, BRISQUE, deepIQA, DB-CNN, and the FR-IQAs were then compared with MOS values to evaluate the performance of the automatic IQA metrics. As a result, the WNR-IQA metric exhibited higher performance than the BRISQUE, deepIQA, DB-CNN, and FR-IQA metrics. The highest-quality images may not be routinely available due to logistical factors, such as dust, poor illumination, and the hot environments present in the timber industry. Moreover, motion blur can occur due to relative motion between the camera and the wood slice. Therefore, the advantage of WNR-IQA lies in its independence from a "perfect" reference image for image quality evaluation.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
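    The GGD features that entry 5 feeds into SVR are typically estimated from mean-subtracted contrast-normalized (MSCN) coefficients, BRISQUE-style. A sketch under stated simplifications (uniform 3x3 window instead of a Gaussian one, grid-search moment matching; the function names are mine, not the paper's):

    ```python
    import numpy as np
    from math import gamma

    def ggd_shape(x):
        """Moment-matching estimate of the GGD shape parameter alpha:
        match the ratio (E|x|)^2 / E[x^2] against its closed form."""
        r = np.abs(x).mean() ** 2 / (x ** 2).mean()
        alphas = np.arange(0.2, 5.0, 0.001)
        rho = np.array([gamma(2 / a) ** 2 / (gamma(1 / a) * gamma(3 / a))
                        for a in alphas])
        return alphas[np.argmin(np.abs(rho - r))]

    def mscn_features(img):
        """Two simple NR-IQA features: GGD shape and variance of the MSCN
        coefficients, computed here with a uniform 3x3 window for brevity."""
        local_mean = sum(np.roll(np.roll(img, di, 0), dj, 1)
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
        local_sq = sum(np.roll(np.roll(img ** 2, di, 0), dj, 1)
                       for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
        local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))
        mscn = (img - local_mean) / (local_std + 1e-6)
        return ggd_shape(mscn.ravel()), mscn.var()

    rng = np.random.default_rng(1)
    shape, var = mscn_features(rng.standard_normal((64, 64)))
    ```

    Sanity check on the estimator: Gaussian samples should give alpha near 2 and Laplacian samples alpha near 1, the two classical special cases of the GGD.
    
    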
  6. Ab Hamid F, Che Azemin MZ, Salam A, Aminuddin A, Mohd Daud N, Zahari I
    Curr Eye Res, 2016 Jun;41(6):823-31.
    PMID: 26268475 DOI: 10.3109/02713683.2015.1056375
    PURPOSE: The goal of this study was to provide the empirical evidence of fractal dimension as an indirect measure of retinal vasculature density.

    MATERIALS AND METHODS: Two hundred retinal samples of the right eye [57.0% females (n = 114) and 43.0% males (n = 86)] were selected from the baseline visit. Custom-written software was used for vessel segmentation, the process of transforming two-dimensional color images into binary images (i.e. black and white pixels). A circular area of approximately 2.6 optic disc radii surrounding the center of the optic disc was cropped, and non-vessel fragments were removed. FracLac was used to measure the fractal dimension and vessel density of the retinal vessels.

    RESULTS: This study suggested that 14.1% of the region of interest (i.e. approximately 2.6 optic disc radii) comprised retinal vessel structure. Correlation analysis showed that the vessel density measurement and the fractal dimension estimate are strongly linearly correlated (R = 0.942, R² = 0.89, p < 0.001).

    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
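    The fractal dimension in entry 6 is typically computed by box counting on the binary vessel image (FracLac does this with many grid offsets; the single-offset version below is a simplified sketch that assumes a square image with non-empty foreground):

    ```python
    import numpy as np

    def box_count_dimension(binary, sizes=(1, 2, 4, 8, 16)):
        """Box-counting estimate of the fractal dimension of a square
        binary image: slope of log(count) vs log(1/box size)."""
        n = binary.shape[0]
        counts = []
        for s in sizes:
            # count boxes of side s containing at least one foreground pixel
            grid = binary[:n - n % s, :n - n % s].reshape(n // s, s, -1, s)
            counts.append(grid.any(axis=(1, 3)).sum())
        coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return coeffs[0]

    d_plane = box_count_dimension(np.ones((64, 64), dtype=bool))   # ~ 2.0
    ```

    A filled image gives dimension 2 and a single line gives 1; retinal vessel trees fall in between, which is why the dimension tracks vessel density.
    
    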
  7. Than JCM, Saba L, Noor NM, Rijal OM, Kassim RM, Yunus A, et al.
    Comput Biol Med, 2017 10 01;89:197-211.
    PMID: 28825994 DOI: 10.1016/j.compbiomed.2017.08.014
    Lung disease risk stratification is important for both diagnosis and treatment planning, particularly in biopsies and radiation therapy. Manual lung disease risk stratification is challenging because of: (a) large lung data sizes, (b) inter- and intra-observer variability in lung delineation and (c) lack of feature amalgamation in the machine learning paradigm. This paper presents a two-stage CADx cascaded system consisting of: (a) a semi-automated lung delineation subsystem (LDS) for lung region extraction in CT slices followed by (b) morphology-based lung tissue characterization, thereby addressing the above shortcomings. The LDS primarily uses entropy-based region extraction, while the ML-based lung characterization rests on an amalgamation of directional transforms such as Riesz and Gabor along with texture-based features comprising 100 greyscale features, using the K-fold cross-validation protocol (K = 2, 3, 5 and 10). The lung database consisted of 96 patients: 15 normal and 81 diseased. We use five high-resolution computed tomography (HRCT) levels representing different anatomical landmarks where disease is commonly seen. We demonstrate an amalgamated ML stratification accuracy of 99.53%, a 2% increase over the conventional non-amalgamated ML system that uses Riesz-based features alone with feature selection based on feature strength. The robustness of the system was determined from its reliability and stability, which showed a reliability index of 0.99 and a deviation in risk stratification accuracies of less than 5%. Our CADx system shows 10% better performance when compared against the mean of five other prominent studies available in the current literature covering over one decade.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  8. Raghavendra U, Gudigar A, Maithri M, Gertych A, Meiburger KM, Yeong CH, et al.
    Comput Biol Med, 2018 04 01;95:55-62.
    PMID: 29455080 DOI: 10.1016/j.compbiomed.2018.02.002
    Ultrasound imaging is one of the most common visualizing tools used by radiologists to identify the location of thyroid nodules. However, visual assessment of nodules is difficult and often affected by inter- and intra-observer variabilities. Thus, a computer-aided diagnosis (CAD) system can be helpful to cross-verify the severity of nodules. This paper proposes a new CAD system to characterize thyroid nodules using optimized multi-level elongated quinary patterns. In this study, higher order spectral (HOS) entropy features extracted from these patterns appropriately distinguished benign and malignant nodules under particle swarm optimization (PSO) and support vector machine (SVM) frameworks. Our CAD algorithm achieved a maximum accuracy of 97.71% and 97.01% in private and public datasets respectively. The evaluation of this CAD system on both private and public datasets confirmed its effectiveness as a secondary tool in assisting radiological findings.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  9. Abdulhay E, Mohammed MA, Ibrahim DA, Arunkumar N, Venkatraman V
    J Med Syst, 2018 Feb 17;42(4):58.
    PMID: 29455440 DOI: 10.1007/s10916-018-0912-y
    Blood leucocyte segmentation in medical images is regarded as difficult because of the variability of blood cells in shape and size and the difficulty of locating the leucocytes. Manual analysis of blood tests to recognize leukocytes is tedious, time-consuming and error-prone because of the varied morphological components of the cells. Segmentation of medical images is also considered hard because of image complexity and the lack of leucocyte models that fully capture the probable shapes of each structure while accounting for cell overlap, the wide variety of blood cells in shape and size, the various factors influencing the outer appearance of the leucocytes, and the low contrast of static microscope images further degraded by noise. We propose a strategy for segmenting blood leucocytes in static microscope images that combines three established computer vision techniques: image enhancement, support vector machine (SVM) based segmentation, and filtering out non-ROI (region of interest) regions on the basis of local binary patterns (LBP) and texture features. Each of these techniques is adapted to the blood leucocyte segmentation problem, making the resulting method considerably more robust than its individual components. Finally, we assess the framework by comparing its output with manual segmentation. The findings of this study demonstrate a new approach that automatically segments blood leucocytes and identifies them in static microscope images. First, the method uses a trainable segmentation procedure and a trained SVM classifier to accurately identify the position of the ROI. Next, non-ROI regions are filtered out based on histogram analysis so that the right objects are chosen.
    Finally, the blood leucocyte type is identified using the texture features. The performance of the proposed approach was tested against manual examination by a gynaecologist using diverse scales. A total of 100 microscope images were used for the comparison, and the results showed that the proposed solution is a viable alternative to the manual segmentation method for accurately determining the ROI. We evaluated blood leucocyte identification using the ROI texture (LBP feature); the identification accuracy is about 95.3%, with 100% sensitivity and 91.66% specificity.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
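    The LBP texture features used in entry 9's final identification stage are straightforward to compute. A minimal 8-neighbour sketch (this is the generic LBP definition, not the paper's exact variant or radius):

    ```python
    import numpy as np

    def lbp_image(gray):
        """8-neighbour local binary pattern code for each interior pixel:
        one bit per neighbour, set when the neighbour >= the centre."""
        c = gray[1:-1, 1:-1]
        code = np.zeros_like(c, dtype=np.uint8)
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        for bit, (di, dj) in enumerate(offsets):
            nb = gray[1 + di:gray.shape[0] - 1 + di,
                      1 + dj:gray.shape[1] - 1 + dj]
            code |= (nb >= c).astype(np.uint8) << bit
        return code

    def lbp_histogram(gray, bins=256):
        """Normalized histogram of LBP codes: the texture feature vector."""
        h, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, 256))
        return h / h.sum()

    img = np.tile(np.arange(8, dtype=np.uint8), (8, 1)) * 30  # horizontal ramp
    hist = lbp_histogram(img)
    ```

    The normalized code histogram is what a classifier (the SVM here) consumes; flat regions collapse to a single code, textured regions spread over many.
    
    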
  10. Oung QW, Muthusamy H, Basah SN, Lee H, Vijean V
    J Med Syst, 2017 Dec 29;42(2):29.
    PMID: 29288342 DOI: 10.1007/s10916-017-0877-2
    Parkinson's disease (PD) is a progressive neurodegenerative disorder that affects a large part of the population. Symptoms of PD include tremor, rigidity, slowness of movement and vocal impairment. To develop an effective diagnostic system, a number of algorithms have been proposed, mainly to distinguish healthy individuals from those with PD. However, most previous work was based on binary classification, with the early PD stage and the advanced ones treated equally. Therefore, in this work, we propose a multiclass classification with three classes of PD severity level (mild, moderate, severe) and healthy control. The focus is to detect and classify PD using signals from wearable motion and audio sensors, based on the empirical wavelet transform (EWT) and the empirical wavelet packet transform (EWPT), respectively. The EWT/EWPT was applied to decompose both speech and motion data signals up to five levels. Next, several features were extracted after obtaining the instantaneous amplitudes and frequencies from the coefficients of the decomposed signals by applying the Hilbert transform. The performance of the algorithm was analysed using three classifiers: K-nearest neighbour (KNN), probabilistic neural network (PNN) and extreme learning machine (ELM). Experimental results demonstrated that our proposed approach can differentiate PD from non-PD subjects, including their severity level, with classification accuracies of more than 90% using EWT/EWPT-ELM on signals from the motion and audio sensors respectively. Additionally, classification accuracy of more than 95% was achieved when EWT/EWPT-ELM was applied to the combined information of both signal types.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
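    Entry 10 extracts instantaneous amplitudes and frequencies via the Hilbert transform. The sketch below builds the analytic signal with an FFT (the same construction scipy.signal.hilbert uses) and applies it to a pure tone; the decomposition into EWT/EWPT sub-bands that precedes this step in the paper is omitted:

    ```python
    import numpy as np

    def analytic_signal(x):
        """Analytic signal via FFT: zero the negative frequencies,
        double the positive ones."""
        n = len(x)
        X = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1
        if n % 2 == 0:
            h[n // 2] = 1
            h[1:n // 2] = 2
        else:
            h[1:(n + 1) // 2] = 2
        return np.fft.ifft(X * h)

    fs = 500.0
    t = np.arange(0, 2, 1 / fs)
    x = 3.0 * np.cos(2 * np.pi * 10 * t)        # 10 Hz tone, amplitude 3
    z = analytic_signal(x)
    inst_amp = np.abs(z)                         # instantaneous amplitude
    inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
    ```

    For this tone the instantaneous amplitude is constant at 3 and the instantaneous frequency at 10 Hz; applied per sub-band, these two series are the feature sources the classifiers consume.
    
    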
  11. Liew TS, Schilthuizen M
    PLoS One, 2016;11(6):e0157069.
    PMID: 27280463 DOI: 10.1371/journal.pone.0157069
    Quantitative analysis of organismal form is an important component for almost every branch of biology. Although generally considered an easily-measurable structure, the quantification of gastropod shell form is still a challenge because many shells lack homologous structures and have a spiral form that is difficult to capture with linear measurements. In view of this, we adopt the idea of theoretical modelling of shell form, in which the shell form is the product of aperture ontogeny profiles in terms of aperture growth trajectory that is quantified as curvature and torsion, and of aperture form that is represented by size and shape. We develop a workflow for the analysis of shell forms based on the aperture ontogeny profile, starting from the procedure of data preparation (retopologising the shell model), via data acquisition (calculation of aperture growth trajectory, aperture form and ontogeny axis), and data presentation (qualitative comparison between shell forms) and ending with data analysis (quantitative comparison between shell forms). We evaluate our methods on representative shells of the genera Opisthostoma and Plectostoma, which exhibit great variability in shell form. The outcome suggests that our method is a robust, reproducible, and versatile approach for the analysis of shell form. Finally, we propose several potential applications of our methods in functional morphology, theoretical modelling, taxonomy, and evolutionary biology.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
  12. Abdullah KA, McEntee MF, Reed W, Kench PL
    J Med Radiat Sci, 2020 Sep;67(3):170-176.
    PMID: 32219989 DOI: 10.1002/jmrs.387
    INTRODUCTION: 3D-printed imaging phantoms are now increasingly available and used for computed tomography (CT) dose optimisation studies and image quality analysis. The aim of this study was to evaluate an integrated 3D-printed cardiac insert phantom for evaluating iterative reconstruction (IR) algorithms in coronary CT angiography (CCTA) protocols.

    METHODS: The 3D-printed cardiac insert phantom was positioned into a chest phantom and scanned with a 16-slice CT scanner. Acquisitions were performed with CCTA protocols using 120 kVp at four different tube currents, 300, 200, 100 and 50 mA (protocols A, B, C and D, respectively). The image data sets were reconstructed with a filtered back projection (FBP) and three different IR algorithm strengths. The image quality metrics of image noise, signal-noise ratio (SNR) and contrast-noise ratio (CNR) were calculated for each protocol.

    RESULTS: Decreasing the dose level significantly increased the image noise compared with FBP of protocol A.

    Matched MeSH terms: Image Processing, Computer-Assisted/instrumentation*
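    The image quality metrics in entry 12 (noise, SNR, CNR) reduce to ROI statistics. A small sketch on a synthetic slice; note that exact definitions vary between papers, and the ones below are common choices rather than necessarily this study's:

    ```python
    import numpy as np

    def image_noise_snr_cnr(img, roi_obj, roi_bg):
        """Noise = background SD; SNR = object mean / noise;
        CNR = (object mean - background mean) / noise.
        roi_obj and roi_bg are boolean masks."""
        mean_obj = img[roi_obj].mean()
        mean_bg = img[roi_bg].mean()
        noise = img[roi_bg].std()
        return noise, mean_obj / noise, (mean_obj - mean_bg) / noise

    # synthetic slice: background 100 +/- 5, a brighter insert at ~150
    rng = np.random.default_rng(0)
    img = rng.normal(100.0, 5.0, (64, 64))
    img[20:30, 20:30] += 50.0
    obj = np.zeros((64, 64), dtype=bool); obj[20:30, 20:30] = True
    bg = np.zeros((64, 64), dtype=bool); bg[45:60, 45:60] = True
    noise, snr, cnr = image_noise_snr_cnr(img, obj, bg)
    ```

    Halving the dose roughly multiplies `noise` by √2, which is exactly the effect the protocols A-D trace out and the IR algorithms try to claw back.
    
    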
  13. Usman OL, Muniyandi RC, Omar K, Mohamad M
    PLoS One, 2021;16(2):e0245579.
    PMID: 33630876 DOI: 10.1371/journal.pone.0245579
    Achieving biologically interpretable neural-biomarkers and features from neuroimaging datasets is a challenging task in an MRI-based dyslexia study. This challenge becomes more pronounced when the MRI datasets are collected from multiple heterogeneous sources with inconsistent scanner settings. This study presents a method of improving the biological interpretation of dyslexia's neural-biomarkers from MRI datasets sourced from publicly available open databases. The proposed system utilized a modified histogram normalization (MHN) method to improve dyslexia neural-biomarker interpretations by mapping the pixel intensities of low-quality input neuroimages to the range between the low-intensity region of interest (ROIlow) and high-intensity region of interest (ROIhigh) of a high-quality image. This was achieved after initial image smoothing using the Gaussian filter method with an isotropic kernel of size 4 mm. The performance of the proposed smoothing and normalization methods was evaluated in three image post-processing experiments: ROI segmentation, gray matter (GM) tissue volume estimation, and deep learning (DL) classification using the Computational Anatomy Toolbox (CAT12) and pre-trained models in a MATLAB working environment. The three experiments were preceded by pre-processing tasks such as image resizing, labelling, patching, and non-rigid registration. Our results showed that the best smoothing was achieved at a scale value σ = 1.25, with a 0.9% increase in the peak signal-to-noise ratio (PSNR). Results from the three image post-processing experiments confirmed the efficacy of the proposed methods. Our analysis showed that the proposed MHN and Gaussian smoothing methods can improve the comparability of image features and neural-biomarkers of dyslexia, with a statistically significant Dice similarity coefficient (DSC) index, low mean square error (MSE), and improved tissue volume estimates.
    After 10 repetitions of 10-fold cross-validation, the highest accuracy achieved by the DL models was 94.7% at a 95% confidence interval (CI). Finally, our findings confirmed that the proposed MHN method significantly outperformed state-of-the-art histogram-matching normalization.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods*
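    The intensity mapping in entry 13 sends a low-quality image's range onto the [ROIlow, ROIhigh] levels of a high-quality reference. A deliberately simplified linear stand-in for the paper's MHN (the published method is more involved; function and parameter names here are illustrative):

    ```python
    import numpy as np

    def map_to_reference_range(img, roi_low, roi_high):
        """Linearly remap image intensities onto [roi_low, roi_high],
        the low/high-intensity ROI levels taken from a high-quality
        reference image."""
        lo, hi = float(img.min()), float(img.max())
        span = hi - lo if hi > lo else 1.0
        return roi_low + (img - lo) / span * (roi_high - roi_low)

    scan = np.array([[0, 40], [80, 120]], dtype=float)   # low-quality input
    norm = map_to_reference_range(scan, roi_low=20.0, roi_high=220.0)
    ```

    The mapping is monotonic, so tissue contrast ordering is preserved while the absolute scale becomes comparable across scanners, which is the point of the normalization step.
    
    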
  14. Burton A, Byrnes G, Stone J, Tamimi RM, Heine J, Vachon C, et al.
    Breast Cancer Res, 2016 12 19;18(1):130.
    PMID: 27993168
    BACKGROUND: Inter-women and intra-women comparisons of mammographic density (MD) are needed in research, clinical and screening applications; however, MD measurements are influenced by mammography modality (screen film/digital) and digital image format (raw/processed). We aimed to examine differences in MD assessed on these image types.

    METHODS: We obtained 1294 pairs of images saved in both raw and processed formats from Hologic and General Electric (GE) direct digital systems and a Fuji computed radiography (CR) system, and 128 screen-film and processed CR-digital pairs from consecutive screening rounds. Four readers performed Cumulus-based MD measurements (n = 3441), with each image pair read by the same reader. Multi-level models of square-root percent MD were fitted, with a random intercept for woman, to estimate processed-raw MD differences.

    RESULTS: Breast area did not differ in processed images compared with that in raw images, but the percent MD was higher, due to a larger dense area (median 28.5 and 25.4 cm2 respectively, mean √dense area difference 0.44 cm (95% CI: 0.36, 0.52)). This difference in √dense area was significant for direct digital systems (Hologic 0.50 cm (95% CI: 0.39, 0.61), GE 0.56 cm (95% CI: 0.42, 0.69)) but not for Fuji CR (0.06 cm (95% CI: -0.10, 0.23)). Additionally, within each system, reader-specific differences varied in magnitude and direction (p 

    Matched MeSH terms: Image Processing, Computer-Assisted*
  15. Kamarudin ND, Ooi CY, Kawanabe T, Odaguchi H, Kobayashi F
    J Healthc Eng, 2017;2017:7460168.
    PMID: 29065640 DOI: 10.1155/2017/7460168
    In tongue diagnosis, the colour of the tongue body carries valuable information about the state of disease and its correlation with the internal organs. Qualitatively, practitioners may have difficulty in their judgement because of unstable lighting conditions and the naked eye's limited ability to capture the exact colour distribution on the tongue, especially a tongue with multicoloured substance. To overcome this ambiguity, this paper presents a two-stage multicolour classification of the tongue based on a support vector machine (SVM) whose support vectors are reduced by our proposed k-means clustering identifiers and a red colour range for precise tongue colour diagnosis. In the first stage, k-means clustering is used to cluster a tongue image into four clusters: image background (black), deep red region, red/light red region, and transitional region. In the second-stage classification, red/light red tongue images are further classified into red tongue or light red tongue based on the red colour range derived in our work. Overall, the classification accuracy of the proposed two-stage method in diagnosing red, light red, and deep red tongue colours is 94%. The number of support vectors in the SVM is reduced by 41.2%, and the execution time for one image is 48 seconds.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods
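    The first stage of entry 15 clusters pixel colours with k-means. A plain Lloyd's-algorithm sketch on two synthetic colour clusters (the cluster count, colours, and seeds are illustrative; the paper clusters real tongue images into four regions):

    ```python
    import numpy as np

    def kmeans(X, k, n_iter=50, seed=0):
        """Plain Lloyd's k-means on the rows of X (e.g. pixel colours)."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
        labels = np.zeros(len(X), dtype=int)
        for _ in range(n_iter):
            # assign each row to its nearest centre, then recompute centres
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            for j in range(k):
                if (labels == j).any():
                    centers[j] = X[labels == j].mean(axis=0)
        return labels, centers

    # two well-separated RGB clusters standing in for tongue regions
    rng = np.random.default_rng(1)
    dark = rng.normal([10, 10, 10], 2.0, (50, 3))     # "background"
    red = rng.normal([180, 40, 40], 2.0, (50, 3))     # "deep red"
    X = np.vstack([dark, red])
    labels, centers = kmeans(X, k=2)
    ```

    The cluster assignments prune the SVM's candidate support vectors, which is how the paper gets its 41.2% reduction.
    
    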
  16. Said MA, Musarudin M, Zulkaffli NF
    Ann Nucl Med, 2020 Dec;34(12):884-891.
    PMID: 33141408 DOI: 10.1007/s12149-020-01543-x
    OBJECTIVE: 18F is the most extensively used radioisotope in current clinical PET imaging. This choice rests on several criteria: it is a pure PET radioisotope with an optimum half-life and low positron energy, which contributes to a smaller positron range. In addition to 18F, radioisotopes such as 68Ga and 124I have recently gained much attention with the growing interest in new PET tracers entering clinical trials. This study aims to determine the minimal scan time per bed position (Tmin) for 124I and 68Ga, based on the quantitative differences in PET imaging of 68Ga and 124I relative to 18F.

    METHODS: The European Association of Nuclear Medicine (EANM) procedure guidelines version 2.0 for FDG-PET tumor imaging were followed for this purpose. A NEMA2012/IEC2008 phantom was filled with a tumor-to-background ratio of 10:1, using activity concentrations of 30 kBq/ml ± 10% and 3 kBq/ml ± 10% for each radioisotope. The phantom was scanned using different acquisition times per bed position (1, 5, 7, 10 and 15 min) to determine Tmin, which was defined using an image coefficient of variation (COV) of 15%.

    RESULTS: Tmin obtained for 18F, 68Ga and 124I were 3.08, 3.24 and 32.93 min, respectively. Quantitative analyses among 18F, 68Ga and 124I images were performed. Signal-to-noise ratio (SNR), contrast recovery coefficients (CRC), and visibility (VH) are the image quality parameters analysed in this study. Generally, 68Ga and 18F gave better image quality as compared to 124I for all the parameters studied.

    CONCLUSION: We have defined Tmin for 18F, 68Ga and 124I PET/CT imaging based on NEMA2012/IEC2008 phantom imaging. Despite the long scanning time suggested by Tmin, improved image quality is achieved, especially for 124I. In clinical practice, however, the long acquisition time may cause patient discomfort and motion artifacts.

    Matched MeSH terms: Image Processing, Computer-Assisted/instrumentation; Image Processing, Computer-Assisted/methods
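    Entry 16 derives Tmin from the point where measured image COV falls to 15%. Counting statistics suggest COV scales as 1/√T, and under that assumed model (the paper derives Tmin from its measured COV curves, not necessarily this fit) Tmin has a closed form. All numbers below are synthetic:

    ```python
    import numpy as np

    def minimal_scan_time(times, covs, target=0.15):
        """Fit COV = a / sqrt(T) to measured coefficients of variation
        and solve for the time where COV equals the target (15% here)."""
        times = np.asarray(times, dtype=float)
        covs = np.asarray(covs, dtype=float)
        a = np.mean(covs * np.sqrt(times))       # one-parameter fit
        return (a / target) ** 2

    # synthetic COV measurements following the assumed 1/sqrt(T) model
    times = np.array([1.0, 5.0, 7.0, 10.0, 15.0])    # min per bed position
    covs = 0.27 / np.sqrt(times)
    tmin = minimal_scan_time(times, covs)            # -> 3.24 min
    ```

    The same computation with a larger fitted `a` (i.e. noisier images, as with 124I) pushes Tmin up quadratically, which matches the order-of-magnitude gap between the 68Ga and 124I values reported above.
    
    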
  17. Abas FS, Shana'ah A, Christian B, Hasserjian R, Louissaint A, Pennell M, et al.
    Cytometry A, 2017 06;91(6):609-621.
    PMID: 28110507 DOI: 10.1002/cyto.a.23049
    The advance of high-resolution digital scans of pathology slides has allowed the development of computer-based image analysis algorithms that may help pathologists in IHC stain quantification. While very promising, these methods require further refinement before they are implemented in routine clinical settings. It is particularly critical to evaluate algorithm performance in a setting similar to current clinical practice. In this article, we present a pilot study that evaluates the use of a computerized cell quantification method in the clinical estimation of CD3-positive (CD3+) T cells in follicular lymphoma (FL). Our goal is to demonstrate the degree to which computerized quantification is comparable to estimation by a panel of expert pathologists. The computerized quantification method uses entropy-based histogram thresholding to separate brown (CD3+) and blue (CD3-) regions after a color space transformation. A panel of four board-certified hematopathologists evaluated a database of 20 FL images using two different reading methods: visual estimation and manual marking of each CD3+ cell in the images. These image data and the readings provided a reference standard and the range of variability among readers. Sensitivity and specificity measures of the computer's segmentation of CD3+ and CD3- T cells were recorded. Across the four pathologists, mean sensitivity and specificity were 90.97% and 88.38%, respectively. The computerized quantification method agrees more closely with manual cell marking than with the visual estimations. Statistical comparison between the computerized quantification method and the pathologist readings demonstrated good agreement, with correlation coefficients of 0.81 and 0.96 in terms of Lin's concordance correlation and Spearman's correlation coefficient, respectively. These values are higher than most of those calculated among the pathologists.
In the future, the computerized quantification method may be used to investigate the relationship between the overall architectural pattern (i.e., interfollicular vs. follicular) and outcome measures (e.g., overall survival, and time to treatment). © 2017 International Society for Advancement of Cytometry.
    Matched MeSH terms: Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/statistics & numerical data*
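    The entry above describes entropy-based histogram thresholding to split stained (brown, CD3+) from counterstained (blue, CD3-) pixels after a color space transform. A minimal sketch of one such criterion, Kapur's maximum-entropy threshold, applied to a synthetic single channel (the two-Gaussian pixel populations are invented for illustration, and the paper's actual color transform is not reproduced):

    ```python
    import numpy as np

    def max_entropy_threshold(channel, bins=256):
        """Kapur's maximum-entropy threshold: pick the level t that
        maximizes the summed entropies of the histogram portions
        below and above t (background vs. foreground)."""
        hist, _ = np.histogram(channel, bins=bins, range=(0, bins))
        p = hist / hist.sum()
        cum = np.cumsum(p)
        best_t, best_sum = 0, -np.inf
        for t in range(1, bins - 1):
            w0, w1 = cum[t], 1.0 - cum[t]
            if w0 <= 0 or w1 <= 0:
                continue
            p0, p1 = p[: t + 1] / w0, p[t + 1:] / w1
            h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
            h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
            if h0 + h1 > best_sum:
                best_sum, best_t = h0 + h1, t
        return best_t

    # Synthetic single-channel image: a dark "brown" population and a
    # bright "blue" population (stand-ins, not real IHC data).
    rng = np.random.default_rng(0)
    channel = np.concatenate([
        rng.normal(60, 10, 5000),   # stand-in for CD3+ (brown) pixels
        rng.normal(180, 10, 5000),  # stand-in for CD3- (blue) pixels
    ]).clip(0, 255)
    t = max_entropy_threshold(channel)
    cd3_positive = channel <= t  # pixels classified as stained
    ```

    On well-separated bimodal data like this, the threshold lands in the valley between the two modes, so the darker population is recovered almost exactly.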
  18. Jayaprakash PT
    Forensic Sci Int, 2015 Jan;246:110-21.
    PMID: 25498986 DOI: 10.1016/j.forsciint.2014.10.043
    Establishing identification during skull-photo superimposition relies on correlating the salient morphological features of an unidentified skull with those of a face image of a suspected dead individual using image overlay processes. Technical progress in the overlay process has included the incorporation of video cameras, image-mixing devices and software that enables real-time vision-mixing. Conceptual transitions have occurred in superimposition methods that involve 'life-size' images, that orient the skull to the posture of the face in the photograph and that assess the extent of match. A recent report on the reliability of identification using the superimposition method adopted the currently prevalent methods and suggested an increased rate of failures when skulls were compared with related and unrelated face images. The reported reduction in the reliability of the superimposition method prompted a review of the transitions in the concepts involved in skull-photo superimposition. The currently popular practices of visualizing the superimposed images at less than 'life-size', overlaying skull-face images by relying on the cranial and facial landmarks in the frontal plane when orienting the skull for matching, and evaluating the match on a morphological basis by relying on mix-mode alone are the major methodological departures that may have reduced identification reliability. The need to reassess the reliability of the method incorporating the concepts that practitioners have considered appropriate is stressed.
    Matched MeSH terms: Image Processing, Computer-Assisted
  19. Mousavi SM, Naghsh A, Abu-Bakar SA
    J Digit Imaging, 2015 Aug;28(4):417-27.
    PMID: 25736857 DOI: 10.1007/s10278-015-9770-z
    This paper presents an automatic region of interest (ROI) segmentation method for watermarking applications in medical images. The advantage of this scheme is that the proposed method is robust against different attacks such as median, Wiener, Gaussian, and sharpening filters; in other words, the technique produces the same ROI before and after these attacks. The proposed algorithm consists of three main parts: an automatic ROI detection system, an evaluation of the system's robustness against numerous attacks, and an enhancement stage that strengthens the system against different attacks. Results obtained from the proposed method demonstrate its promising performance.
    Matched MeSH terms: Image Processing, Computer-Assisted
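    The robustness claim above, that the detector returns the same ROI before and after filtering attacks, can be checked with an overlap score such as the Dice coefficient. A toy sketch of that evaluation loop (the fixed-threshold "detector", the box-blur "attack", and the synthetic disc image are all stand-ins, not the paper's method):

    ```python
    import numpy as np

    def box_blur(image, k):
        """k x k mean filter, used here as a stand-in filtering 'attack'."""
        pad = k // 2
        padded = np.pad(image, pad, mode="edge")
        out = np.zeros_like(image, dtype=float)
        h, w = image.shape
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + h, dx:dx + w]
        return out / (k * k)

    def segment_roi(image, level=0.5):
        """Toy ROI 'detector': a fixed threshold (the paper's automatic
        detection system is not reproduced here)."""
        return image > level

    def dice(a, b):
        """Dice overlap between two binary masks (1.0 = identical)."""
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    # Synthetic image: a bright disc (the ROI) on a dark background.
    y, x = np.mgrid[:128, :128]
    image = (((x - 64) ** 2 + (y - 64) ** 2) < 30 ** 2).astype(float)

    baseline = segment_roi(image)
    scores = {k: dice(baseline, segment_roi(box_blur(image, k)))
              for k in (3, 5)}
    ```

    A detector that is robust in the paper's sense should keep the Dice score near 1.0 for every attack in the dictionary; scores well below 1.0 would flag attacks the enhancement stage needs to address.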
  20. Zourmand A, Mirhassani SM, Ting HN, Bux SI, Ng KH, Bilgen M, et al.
    Biomed Eng Online, 2014;13:103.
    PMID: 25060583 DOI: 10.1186/1475-925X-13-103
    The phonetic properties of six Malay vowels are investigated using magnetic resonance imaging (MRI) to visualize the vocal tract and obtain dynamic articulatory parameters during speech production. To resolve image blurring caused by tongue movement during the scanning process, a method based on active contour extraction is used to track tongue contours. The proposed method efficiently tracks tongue contours despite the partial blurring of the MRI images. Consequently, articulatory parameters are measured effectively as tongue movement is observed, and the specific shape and position of the tongue are determined for all six uttered Malay vowels. Speech rehabilitation procedures demand a visually perceivable prototype of speech articulation. To investigate the validity of the measured articulatory parameters against the acoustic theory of speech production, an acoustic analysis of the vowels uttered by the subjects was performed. When the acoustic and articulatory parameters of the uttered speech were examined, a correlation between formant frequencies and articulatory parameters was observed. The experiments reported a positive correlation between the constriction location of the tongue body and the first formant frequency, as well as a negative correlation between the constriction location of the tongue tip and the second formant frequency. The results demonstrate that the proposed method is an effective tool for the dynamic study of speech production.
    Matched MeSH terms: Image Processing, Computer-Assisted
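    The acoustic analysis above reports rank correlations between articulatory parameters and formant frequencies. A small sketch of computing Spearman's rank correlation with NumPy (the per-vowel values below are invented for illustration, not the paper's measurements, and ties in the data are not handled):

    ```python
    import numpy as np

    def spearman(x, y):
        """Spearman's rank correlation: Pearson correlation of the ranks.
        Double argsort converts values to ranks; ties are not handled
        (a full implementation would use average ranks for ties)."""
        rx = np.argsort(np.argsort(x)).astype(float)
        ry = np.argsort(np.argsort(y)).astype(float)
        rx -= rx.mean()
        ry -= ry.mean()
        return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

    # Illustrative (invented) per-vowel values: tongue-body constriction
    # location and first-formant frequency F1, one pair per Malay vowel.
    constriction = np.array([0.20, 0.35, 0.50, 0.60, 0.75, 0.90])
    f1_hz = np.array([280.0, 320.0, 450.0, 500.0, 640.0, 700.0])
    rho = spearman(constriction, f1_hz)  # monotone data -> rho == 1.0
    ```

    Perfectly monotone pairs give rho = +1 (or -1 when anti-monotone), matching the sign-of-correlation statements in the abstract.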