Displaying all 19 publications

  1. Zakaria MN, Salim R, Abdul Wahat NH, Md Daud MK, Wan Mohamad WN
    Sci Rep, 2023 Dec 21;13(1):22842.
    PMID: 38129442 DOI: 10.1038/s41598-023-48810-1
    There has been growing interest in studying the usefulness of chirp stimuli in recording cervical vestibular evoked myogenic potential (cVEMP) waveforms. Nevertheless, the study outcomes are debatable and require verification. In view of this, the aim of the present study was to compare cVEMP results elicited by 500 Hz tone burst and narrowband (NB) CE-Chirp stimuli in adults with sensorineural hearing loss (SNHL). Fifty adults with bilateral SNHL (aged 20-65 years) underwent cVEMP testing based on the established protocol. The 500 Hz tone burst and NB CE-Chirp (centred at 500 Hz) stimuli were presented to each ear at an intensity level of 120.5 dB peSPL. P1 latency, N1 latency, and P1-N1 amplitude values were analysed accordingly. The NB CE-Chirp stimulus produced significantly shorter P1 and N1 latencies (p < …, d > 0.80). In contrast, both stimuli elicited cVEMP responses with P1-N1 amplitude values that were not statistically different from one another (p = 0.157, d = 0.15). Additionally, age and hearing level were found to be significantly correlated (r = 0.56, p …).
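    The abstract above reports effect sizes (d = 0.15 vs. d > 0.80) alongside p values. As a minimal sketch of how such an effect size is obtained, the snippet below computes Cohen's d with a pooled standard deviation; all latency numbers are invented for illustration, not taken from the study.

    ```python
    import math
    import statistics

    def cohens_d(group_a, group_b):
        """Cohen's d with a pooled standard deviation (independent-groups form)."""
        n_a, n_b = len(group_a), len(group_b)
        var_a = statistics.variance(group_a)  # sample variance (ddof = 1)
        var_b = statistics.variance(group_b)
        pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
        return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

    # Hypothetical P1 latencies (ms) for two stimulus types
    tone_burst = [14.1, 14.8, 15.2, 14.5, 15.0]
    nb_chirp = [13.0, 13.5, 13.9, 13.2, 13.7]
    d = cohens_d(tone_burst, nb_chirp)
    ```

    By the usual convention, |d| above roughly 0.8 is read as a large effect, which is the criterion the abstract appeals to.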
    Matched MeSH terms: Acoustic Stimulation/methods
  2. Palaniappan R, Phon-Amnuaisuk S, Eswaran C
    Int J Cardiol, 2015;190:262-3.
    PMID: 25932800 DOI: 10.1016/j.ijcard.2015.04.175
    Matched MeSH terms: Acoustic Stimulation/methods
  3. Zakaria MN, Jalaei B, Wahab NA
    Eur Arch Otorhinolaryngol, 2016 Feb;273(2):349-54.
    PMID: 25682179 DOI: 10.1007/s00405-015-3555-3
    For estimating behavioral hearing thresholds, auditory steady state response (ASSR) can be reliably evoked by stimuli at low and high modulation frequencies (MFs). In this regard, little is known regarding ASSR thresholds evoked by stimuli at different MFs in female and male participants. In fact, recent data suggest that 40-Hz ASSR is influenced by estrogen level in females. Hence, the aim of the present study was to determine the effect of gender and MF on ASSR thresholds in young adults. Twenty-eight normally hearing participants (14 males and 14 females) were enrolled in this study. For each subject, ASSR thresholds were recorded with narrow-band chirps at 500, 1,000, 2,000, and 4,000 Hz carrier frequencies (CFs) and at 40 and 90 Hz MFs. Two-way mixed ANOVA (with gender and MF as the factors) revealed no significant interaction effect between factors at all CFs (p > 0.05). The gender effect was only significant at 500 Hz CF (p < 0.05). At 500 and 1,000 Hz CFs, mean ASSR thresholds were significantly lower at 40 Hz MF than at 90 Hz MF (p < 0.05). Interestingly, at 2,000 and 4,000 Hz CFs, mean ASSR thresholds were significantly lower at 90 Hz MF than at 40 Hz MF (p < 0.05). The lower ASSR thresholds in females might be due to hormonal influence. When recording ASSR thresholds at low MF, we suggest the use of gender-specific normative data so that more valid comparisons can be made, particularly at 500 Hz CF.
    Matched MeSH terms: Acoustic Stimulation/methods*
  4. Yuvaraj R, Murugappan M, Ibrahim NM, Sundaraj K, Omar MI, Mohamad K, et al.
    Int J Psychophysiol, 2014 Dec;94(3):482-95.
    PMID: 25109433 DOI: 10.1016/j.ijpsycho.2014.07.014
    In addition to classic motor signs and symptoms, individuals with Parkinson's disease (PD) are characterized by emotional deficits. Ongoing brain activity can be recorded by electroencephalograph (EEG) to discover the links between emotional states and brain activity. This study utilized machine-learning algorithms to categorize emotional states in PD patients compared with healthy controls (HC) using EEG. Twenty non-demented PD patients and 20 healthy age-, gender-, and education level-matched controls viewed happiness, sadness, fear, anger, surprise, and disgust emotional stimuli while fourteen-channel EEG was being recorded. Multimodal stimulus (combination of audio and visual) was used to evoke the emotions. To classify the EEG-based emotional states and visualize the changes of emotional states over time, this paper compares four kinds of EEG features for emotional state classification and proposes an approach to track the trajectory of emotion changes with manifold learning. From the experimental results using our EEG data set, we found that (a) bispectrum feature is superior to other three kinds of features, namely power spectrum, wavelet packet and nonlinear dynamical analysis; (b) higher frequency bands (alpha, beta and gamma) play a more important role in emotion activities than lower frequency bands (delta and theta) in both groups and; (c) the trajectory of emotion changes can be visualized by reducing subject-independent features with manifold learning. This provides a promising way of implementing visualization of patient's emotional state in real time and leads to a practical system for noninvasive assessment of the emotional impairments associated with neurological disorders.
    Matched MeSH terms: Acoustic Stimulation/methods*
  5. Khairi MD, Din S, Shahid H, Normastura AR
    J Laryngol Otol, 2005 Sep;119(9):678-83.
    PMID: 16156907
    The objective of this prospective study was to report on the prevalence of hearing impairment in the neonatal unit population. From 15 February 2000 to 15 March 2000 and from 15 February 2001 to 15 May 2001, 401 neonates were screened using transient evoked otoacoustic emissions (TEOAE) followed by second-stage screening of those infants who failed the initial test. Eight (2 per cent) infants failed one ear and 23 (5.74 per cent) infants failed both ears, adding up to 7.74 per cent planned for second-stage screening. Five out of 22 infants who came for the follow up failed the screening, resulting in a prevalence of hearing impairment of 1 per cent (95 per cent confidence interval [95% CI]: 0.0-2.0). Craniofacial malformations, very low birth weight, ototoxic medication, stigmata/syndromes associated with hearing loss and hyperbilirubinaemia at the level of exchange transfusion were identified to be independent significant risk factors for hearing impairment, while poor Apgar scores and mechanical ventilation of more than five days were not. In conclusion, hearing screening in high-risk neonates revealed a total of 1 per cent with hearing loss. The changes in the risk profile indicate improved perinatal handling in a neonatal population at risk for hearing disorders.
    Matched MeSH terms: Acoustic Stimulation/methods
  6. Jalaei B, Shaabani M, Zakaria MN
    Braz J Otorhinolaryngol, 2017 Jan-Feb;83(1):10-15.
    PMID: 27102175 DOI: 10.1016/j.bjorl.2015.12.005
    INTRODUCTION: The performance of auditory steady state response (ASSR) in threshold testing when recorded ipsilaterally and contralaterally, as well as at low and high modulation frequencies (MFs), has not been systematically studied.

    OBJECTIVE: To verify the influences of mode of recording (ipsilateral vs. contralateral) and modulation frequency (40Hz vs. 90Hz) on ASSR thresholds.

    METHODS: Fifteen female and 14 male subjects (aged 18-30 years) with normal hearing bilaterally were studied. Narrow-band CE-Chirp® stimuli (centred at 500, 1000, 2000, and 4000Hz) modulated at 40 and 90Hz MFs were presented to the participants' right ear. The ASSR thresholds were then recorded at each test frequency in both ipsilateral and contralateral channels.

    RESULTS: Due to pronounced interaction effects between mode of recording and MF (p<0.05 by two-way repeated measures ANOVA), mean ASSR thresholds were then compared among four conditions (ipsi-40Hz, ipsi-90Hz, contra-40Hz, and contra-90Hz) using one-way repeated measures ANOVA. At the 500 and 1000Hz test frequencies, contra-40Hz condition produced the lowest mean ASSR thresholds. In contrast, at high frequencies (2000 and 4000Hz), ipsi-90Hz condition revealed the lowest mean ASSR thresholds. At most test frequencies, contra-90Hz produced the highest mean ASSR thresholds.

    CONCLUSIONS: Based on the findings, the present study recommends two different protocols for optimum threshold testing with ASSR, at least when testing young adults. This includes the use of the contra-40Hz recording mode due to its promising performance in hearing threshold estimation.
    Matched MeSH terms: Acoustic Stimulation/methods*
  7. Mukari SZMS, Yusof Y, Ishak WS, Maamor N, Chellapan K, Dzulkifli MA
    Braz J Otorhinolaryngol, 2018 12 10;86(2):149-156.
    PMID: 30558985 DOI: 10.1016/j.bjorl.2018.10.010
    INTRODUCTION: Hearing acuity, central auditory processing and cognition contribute to the speech recognition difficulty experienced by older adults. Therefore, quantifying the contribution of these factors on speech recognition problem is important in order to formulate a holistic and effective rehabilitation.

    OBJECTIVE: To examine the relative contributions of auditory functioning and cognition status to speech recognition in quiet and in noise.

    METHODS: We measured speech recognition in quiet and in composite noise using the Malay Hearing in Noise Test in 72 older adults (60-82 years), all native Malay speakers, with normal hearing to mild hearing loss. Auditory function was assessed with pure tone audiometry, gaps-in-noise, and dichotic digit tests. Cognitive function was assessed using the Malay Montreal Cognitive Assessment.

    RESULTS: Linear regression analyses using the backward elimination technique revealed that the better ear four frequency average (0.5-4kHz) (4FA), the high frequency average, and the Malay Montreal Cognitive Assessment score contributed to speech perception in quiet (total r2=0.499). On the other hand, the high frequency average, the Malay Montreal Cognitive Assessment score, and the dichotic digit test contributed significantly to speech recognition in noise (total r2=0.307). Whereas the better ear high frequency average was the main predictor of speech recognition in quiet, speech recognition in noise was mainly predicted by cognitive function.

    CONCLUSIONS: These findings highlight the fact that besides hearing sensitivity, cognition plays an important role in speech recognition ability among older adults, especially in noisy environments. Therefore, in addition to hearing aids, rehabilitation, which trains cognition, may have a role in improving speech recognition in noise ability of older adults.
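    The regression-with-backward-elimination analysis described above can be sketched in plain Python. This is a simplified illustration on synthetic data, not the authors' analysis: predictors are kept or dropped by their incremental contribution to R² rather than by the p-value criterion statistical packages typically use, and every variable name and number below is invented.

    ```python
    import random

    def solve(A, b):
        """Solve A x = b by Gaussian elimination with partial pivoting."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[pivot] = M[pivot], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
        return x

    def r_squared(y, predictors):
        """R^2 of an OLS fit of y on the given predictor columns (plus intercept)."""
        n = len(y)
        X = [[1.0] + [p[i] for p in predictors] for i in range(n)]
        k = len(X[0])
        XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
        Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
        beta = solve(XtX, Xty)
        yhat = [sum(X[i][a] * beta[a] for a in range(k)) for i in range(n)]
        ybar = sum(y) / n
        ss_res = sum((y[i] - yhat[i]) ** 2 for i in range(n))
        ss_tot = sum((yi - ybar) ** 2 for yi in y)
        return 1.0 - ss_res / ss_tot

    def backward_eliminate(y, named_predictors, max_r2_drop=0.02):
        """Repeatedly drop the predictor whose removal costs the least R^2,
        stopping once any further removal would cost more than max_r2_drop."""
        kept = dict(named_predictors)
        while len(kept) > 1:
            full = r_squared(y, list(kept.values()))
            drops = {name: full - r_squared(y, [v for n2, v in kept.items() if n2 != name])
                     for name in kept}
            weakest = min(drops, key=drops.get)
            if drops[weakest] > max_r2_drop:
                break
            del kept[weakest]
        return kept

    # Synthetic data: speech reception threshold driven by hearing and cognition,
    # plus one pure-noise predictor that elimination should discard.
    rng = random.Random(42)
    n = 60
    hf_avg = [rng.gauss(40, 10) for _ in range(n)]    # high-frequency average (dB HL)
    cognition = [rng.gauss(25, 3) for _ in range(n)]  # cognitive screening score
    irrelevant = [rng.gauss(0, 1) for _ in range(n)]  # unrelated predictor
    srt = [0.3 * hf_avg[i] - 0.8 * cognition[i] + rng.gauss(0, 1.5) for i in range(n)]

    selected = backward_eliminate(srt, {"hf_avg": hf_avg, "cognition": cognition,
                                        "noise": irrelevant})
    ```

    With these synthetic effects, the elimination retains the two genuinely predictive variables and discards the noise column, mirroring how the study arrives at a reduced model for each listening condition.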

    Matched MeSH terms: Acoustic Stimulation/methods
  8. Quar TK, Soli SD, Chan YF, Ishak WS, Abdul Wahat NH
    Int J Audiol, 2017 02;56(2):92-98.
    PMID: 27686009 DOI: 10.1080/14992027.2016.1210828
    OBJECTIVE: This study was conducted to evaluate the speech perception of Malaysian Chinese adults using the Taiwanese Mandarin HINT (MHINT-T) and the Malay HINT (MyHINT).

    DESIGN: The MHINT-T and the MyHINT were presented in quiet and noise (front, right and left) conditions under headphones. Results for the two tests were compared with each other and with the norms for each test.

    STUDY SAMPLE: Malaysian Chinese native speakers of Mandarin (N = 58), 18-31 years of age with normal hearing.

    RESULTS: On average, subjects demonstrated poorer speech perception ability than the normative samples for these tests. Repeated measures ANOVA showed that speech reception thresholds (SRTs) were slightly poorer on the MHINT-T than on the MyHINT for all test conditions. However, normalized SRTs were poorer by 0.6 standard deviations for MyHINT as compared with MHINT-T.

    CONCLUSIONS: MyHINT and MHINT-T can be used as norm-referenced speech perception measures for Mandarin-speaking Chinese in Malaysia.

    Matched MeSH terms: Acoustic Stimulation/methods*
  9. Jalaei B, Azmi MHAM, Zakaria MN
    Braz J Otorhinolaryngol, 2018 05 17;85(4):486-493.
    PMID: 29858160 DOI: 10.1016/j.bjorl.2018.04.005
    INTRODUCTION: Binaurally evoked auditory evoked potentials have good diagnostic values when testing subjects with central auditory deficits. The literature on speech-evoked auditory brainstem response evoked by binaural stimulation is in fact limited. Gender disparities in speech-evoked auditory brainstem response results have been consistently noted but the magnitude of gender difference has not been reported.

    OBJECTIVE: The present study aimed to compare the magnitude of gender difference in speech-evoked auditory brainstem response results between monaural and binaural stimulations.

    METHODS: A total of 34 healthy Asian adults aged 19-30 years participated in this comparative study. Eighteen of them were females (mean age=23.6±2.3 years) and the remaining sixteen were males (mean age=22.0±2.3 years). For each subject, speech-evoked auditory brainstem response was recorded with the synthesized syllable /da/ presented monaurally and binaurally.

    RESULTS: While latencies were not affected (p>0.05), the binaural stimulation produced statistically higher speech-evoked auditory brainstem response amplitudes than the monaural stimulation (p<0.05). As revealed by large effect sizes (d>0.80), substantive gender differences were noted in most of speech-evoked auditory brainstem response peaks for both stimulation modes.

    CONCLUSION: The magnitude of gender difference between the two stimulation modes revealed some distinct patterns. Based on these clinically significant results, gender-specific normative data are highly recommended when using speech-evoked auditory brainstem response for clinical and future applications. The preliminary normative data provided in the present study can serve as the reference for future studies on this test among Asian adults.

    Matched MeSH terms: Acoustic Stimulation/methods*
  10. Jalaei B, Zakaria MN, Mohd Azmi MH, Nik Othman NA, Sidek D
    Ann Otol Rhinol Laryngol, 2017 Apr;126(4):290-295.
    PMID: 28177264 DOI: 10.1177/0003489417690169
    OBJECTIVES: Gender disparities in speech-evoked auditory brainstem response (speech-ABR) outcomes have been reported, but the literature is limited. The present study was performed to further verify this issue and determine the influence of head size on speech-ABR results between genders.

    METHODS: Twenty-nine healthy Malaysian subjects (14 males and 15 females) aged 19 to 30 years participated in this study. After measuring the head circumference, speech-ABR was recorded by using synthesized syllable /da/ from the right ear of each participant. Speech-ABR peaks amplitudes, peaks latencies, and composite onset measures were computed and analyzed.

    RESULTS: Significant gender disparities were noted in the transient component but not in the sustained component of speech-ABR. Statistically higher V/A amplitudes and shallower V/A slopes were found in females. These gender differences were partially attenuated after controlling for head size.

    CONCLUSIONS: Head size is not the main contributing factor for gender disparities in speech-ABR outcomes. Gender-specific normative data can be useful when recording speech-ABR for clinical purposes.

    Matched MeSH terms: Acoustic Stimulation/methods
  11. Zakaria MN, Abdullah R, Nik Othman NA
    Ear Hear, 2018 11 22;40(4):1039-1042.
    PMID: 30461445 DOI: 10.1097/AUD.0000000000000676
    OBJECTIVES: Post-auricular muscle response (PAMR) is a large myogenic potential that can be useful in estimating behavioral hearing thresholds when the recording protocol is optimal. The main aim of the present study was to determine the influence of stimulus repetition rate on PAMR threshold.

    DESIGN: In this repeated-measures study, 20 normally hearing adults aged between 18 and 30 years were recruited. Tone bursts (500, 1000, 2000, and 4000 Hz) were used to record PAMR thresholds at 3 different stimulus repetition rates (6.1/s, 11.1/s, and 17.1/s).

    RESULTS: Statistically higher PAMR thresholds were found for the faster stimulus rate (17.1/s) compared with the slower stimulus rate (6.1/s) (p < 0.05). For all stimulus rates and frequencies, significant correlations were found between PAMR and pure-tone audiometry thresholds (r = 0.62 to 0.82).

    CONCLUSIONS: Even though the stimulus rate effect was significant at most of the tested frequencies, the differences in PAMR thresholds between the rates were small (<5 dB). Nevertheless, based on the correlation results, we suggest the use of 11.1/s stimulus rate when recording PAMR thresholds.
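    The correlations reported above (r = 0.62 to 0.82 between PAMR and pure-tone thresholds) are Pearson product-moment coefficients. A minimal sketch of that computation follows; the paired thresholds are hypothetical values made up for the example.

    ```python
    import math

    def pearson_r(x, y):
        """Pearson product-moment correlation coefficient."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
        sy = math.sqrt(sum((yi - my) ** 2 for yi in y))
        return cov / (sx * sy)

    # Hypothetical paired thresholds (dB) for eight ears: PAMR vs. pure-tone audiometry
    pamr = [35, 40, 45, 40, 50, 55, 45, 60]
    pta = [10, 15, 25, 20, 30, 35, 20, 40]
    r = pearson_r(pamr, pta)
    ```

    On Python 3.10+, `statistics.correlation` gives the same result without the hand-rolled helper.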

    Matched MeSH terms: Acoustic Stimulation/methods*
  12. Zakaria MN, Jalaei B
    Int J Pediatr Otorhinolaryngol, 2017 Nov;102:28-31.
    PMID: 29106871 DOI: 10.1016/j.ijporl.2017.08.033
    OBJECTIVE: Auditory brainstem responses evoked by complex stimuli such as speech syllables have been studied in normal subjects and subjects with compromised auditory functions. The stability of speech-evoked auditory brainstem response (speech-ABR) when tested over time has been reported but the literature is limited. The present study was carried out to determine the test-retest reliability of speech-ABR in healthy children at a low sensation level.

    METHODS: Seventeen healthy children (6 boys, 11 girls) aged from 5 to 9 years (mean = 6.8 ± 3.3 years) were tested in two sessions separated by a 3-month period. The stimulus used was a 40-ms syllable /da/ presented at 30 dB sensation level.

    RESULTS: As revealed by paired t-test and intra-class correlation (ICC) analyses, peak latencies, peak amplitudes, and composite onset measures of speech-ABR were found to be highly replicable. Compared with other parameters, higher ICC values were noted for the peak latencies of speech-ABR.

    CONCLUSION: The present study was the first to report the test-retest reliability of speech-ABR recorded at low stimulation levels in healthy children. Due to its good stability, it can be used as an objective indicator for assessing the effectiveness of auditory rehabilitation in hearing-impaired children in future studies.
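    The test-retest analysis above rests on the intra-class correlation. As a sketch, the snippet below computes ICC(3,1) (two-way mixed model, single measure, a common choice for test-retest designs, though the abstract does not state which form was used) from two sessions of invented wave latencies.

    ```python
    def icc_3_1(session1, session2):
        """Two-way mixed, single-measure ICC(3,1) for test-retest data (k = 2 sessions)."""
        data = list(zip(session1, session2))
        n, k = len(data), 2
        grand = sum(sum(row) for row in data) / (n * k)
        row_means = [sum(row) / k for row in data]           # one mean per subject
        col_means = [sum(col) / n for col in zip(*data)]     # one mean per session
        ss_rows = k * sum((m - grand) ** 2 for m in row_means)
        ss_cols = n * sum((m - grand) ** 2 for m in col_means)
        ss_total = sum((x - grand) ** 2 for row in data for x in row)
        ss_err = ss_total - ss_rows - ss_cols
        ms_rows = ss_rows / (n - 1)
        ms_err = ss_err / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

    # Hypothetical wave latencies (ms) for six children, two sessions 3 months apart
    first = [6.60, 6.75, 6.90, 7.05, 7.20, 6.80]
    second = [6.62, 6.78, 6.88, 7.10, 7.18, 6.84]
    icc = icc_3_1(first, second)
    ```

    Because the two sessions agree closely relative to the spread across subjects, the ICC here comes out near 1, the pattern the abstract describes as "highly replicable".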

    Matched MeSH terms: Acoustic Stimulation/methods*
  13. Amir Kassim A, Rehman R, Price JM
    Acta Psychol (Amst), 2018 Apr;185:72-80.
    PMID: 29407247 DOI: 10.1016/j.actpsy.2018.01.012
    Previous research has shown that auditory recognition memory is poorer compared to visual and cross-modal (visual and auditory) recognition memory. The effect of repetition on memory has been robust in showing improved performance. It is not clear, however, how auditory recognition memory compares to visual and cross-modal recognition memory following repetition. Participants performed a recognition memory task, making old/new discriminations to new stimuli, stimuli repeated for the first time after 4-7 intervening items (R1), or repeated for the second time after 36-39 intervening items (R2). Depending on the condition, participants were either exposed to visual stimuli (2D line drawings), auditory stimuli (spoken words), or cross-modal stimuli (pairs of images and associated spoken words). Results showed that unlike participants in the visual and cross-modal conditions, participants in the auditory condition did not show improvements in performance on R2 trials compared to R1 trials. These findings have implications for pedagogical techniques in education, as well as for interventions and exercises aimed at boosting memory performance.
    Matched MeSH terms: Acoustic Stimulation/methods*
  14. Rahmat S, O'Beirne GA
    Hear Res, 2015 Dec;330(Pt A):125-33.
    PMID: 26209881 DOI: 10.1016/j.heares.2015.07.013
    Schroeder-phase masking complexes have been used in many psychophysical experiments to examine the phase curvature of cochlear filtering at characteristic frequencies, and other aspects of cochlear nonlinearity. In a normal nonlinear cochlea, changing the "scalar factor" of the Schroeder-phase masker from -1 through 0 to +1 results in a marked difference in the measured masked thresholds, whereas this difference is reduced in ears with damaged outer hair cells. Despite the valuable information it may give, one disadvantage of the Schroeder-phase masking procedure is the length of the test - using the conventional three-alternative forced-choice technique to measure a masking function takes around 45 min for one combination of probe frequency and intensity. As an alternative, we have developed a fast method of recording these functions which uses a Békésy tracking procedure. Testing at 500 Hz in normal hearing participants, we demonstrate that our fast method: i) shows good agreement with the conventional method; ii) shows high test-retest reliability; and iii) shortens the testing time to 8 min.
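    The fast method above is built on Békésy tracking: the stimulus level falls while the listener responds and rises when they do not, and the threshold is estimated from the reversal levels. The toy simulation below shows only that tracking idea with a deterministic simulated listener; it is not the authors' procedure, which tracks masked thresholds under Schroeder-phase maskers.

    ```python
    def track_threshold(hears, start_level=60.0, step=2.0, max_reversals=8):
        """Simple Bekesy-style tracker: lower the level while the listener responds,
        raise it when they do not; estimate threshold as the mean reversal level."""
        level = start_level
        direction = -1  # attenuating while the listener hears the probe
        reversals = []
        while len(reversals) < max_reversals:
            heard = hears(level)
            new_direction = -1 if heard else 1
            if new_direction != direction:  # response changed: record a reversal
                reversals.append(level)
                direction = new_direction
            level += direction * step
        return sum(reversals) / len(reversals)

    TRUE_THRESHOLD = 30.0  # dB; the simulated listener hears any level at or above this
    estimate = track_threshold(lambda level: level >= TRUE_THRESHOLD)
    ```

    The track oscillates around the true threshold, so the mean of the reversal levels lands within one step size of it; averaging reversals is what lets the procedure converge in minutes rather than the ~45 min of a forced-choice staircase.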
    Matched MeSH terms: Acoustic Stimulation/methods
  15. Maamor N, Billings CJ
    Neurosci Lett, 2017 01 01;636:258-264.
    PMID: 27838448 DOI: 10.1016/j.neulet.2016.11.020
    The purpose of this study was to determine the effects of noise type, signal-to-noise ratio (SNR), age, and hearing status on cortical auditory evoked potentials (CAEPs) to speech sounds. This helps to explain the hearing-in-noise difficulties often seen in the aging and hearing impaired population. Continuous, modulated, and babble noise types were presented at varying SNRs to 30 individuals divided into three groups according to age and hearing status. Significant main effects of noise type, SNR, and group were found. Interaction effects revealed that the SNR effect varies as a function of noise type and is most systematic for continuous noise. Effects of age and hearing loss were limited to CAEP latency and were differentially modulated by energetic and informational-like masking. It is clear that the spectrotemporal characteristics of signals and noises play an important role in determining the morphology of neural responses. Participant factors such as age and hearing status, also play an important role in determining the brain's response to complex auditory stimuli and contribute to the ability to listen in noise.
    Matched MeSH terms: Acoustic Stimulation/methods
  16. Dzulkarnain AAA, Noor Ibrahim SHM, Anuar NFA, Abdullah SA, Tengku Zam Zam TZH, Rahmat S, et al.
    Int J Audiol, 2017 Oct;56(10):723-732.
    PMID: 28415891 DOI: 10.1080/14992027.2017.1313462
    OBJECTIVE: To investigate the influence of two different electrode montages (ipsilateral: reference to mastoid; vertical: reference to nape of neck) on the ABR results recorded using a level-specific (LS)-CE-Chirp® in normally hearing subjects at multiple intensity levels.

    DESIGN: Quasi-experimental and repeated measure study designs were applied in this study. Two different stopping criteria were used, (1) a fixed-signal averaging 4000 sweeps and, (2) a minimum quality indicator of Fmp = 3.1 with a minimum of 800 sweeps.

    STUDY SAMPLE: Twenty-nine normally hearing adults (18 females, 11 males) participated.

    RESULTS: Wave V amplitudes were significantly larger in the LS CE-Chirp® recorded from the vertical montage than the ipsilateral montage. Waves I and III amplitudes were significantly larger from the ipsilateral LS CE-Chirp® than from the other montages and stimulus combinations. The differences in the quality of the ABR recording between the vertical and ipsilateral montages were marginal.

    CONCLUSIONS: Overall, the result suggested that the vertical LS CE-Chirp® ABR had a high potential for a threshold-seeking application, because it produced a higher wave V amplitude. The Ipsilateral LS CE-Chirp® ABR, on the other hand, might also have a high potential for the site of lesion application, because it produced larger waves I and III amplitudes.

    Matched MeSH terms: Acoustic Stimulation/methods*
  17. Dzulkarnain AAA, Abdullah SA, Ruzai MAM, Ibrahim SHMN, Anuar NFA, Rahim 'EA
    Am J Audiol, 2018 Sep 12;27(3):294-305.
    PMID: 30054628 DOI: 10.1044/2018_AJA-17-0087
    Purpose: The purpose of this study was to investigate the influence of 2 different electrode montages (ipsilateral and vertical) on the auditory brainstem response (ABR) findings elicited from narrow band (NB) level-specific (LS) CE-Chirp and tone-burst in subjects with normal hearing at several intensity levels and frequency combinations.

    Method: Quasi-experimental and repeated-measures study designs were used in this study. Twenty-six adults with normal hearing (17 females, 9 males) participated. ABRs were acquired from the study participants at 3 intensity levels (80, 60, and 40 dB nHL), 3 frequencies (500, 1000, and 2000 Hz), 2 electrode montages (ipsilateral and vertical), and 2 stimuli (NB LS CE-Chirp and tone-burst) using 2 stopping criteria (fixed averages at 4,000 sweeps and F test at multiple points = 3.1).

    Results: Wave V amplitudes were only 19%-26% larger for the vertical recordings than for the ipsilateral recordings in both the ABRs obtained from the NB LS CE-Chirp and tone-burst stimuli. The mean differences in the F test at multiple points values and the residual noise levels between the ABRs obtained from the vertical and ipsilateral montages were not statistically significant. In addition, the ABR elicited by the NB LS CE-Chirp was significantly larger (up to 69%) than that elicited by the tone-burst, except at the lower intensity level.

    Conclusion: Both the ipsilateral and vertical montages can be used to record ABR to the NB LS CE-Chirp because of the small enhancement in the wave V amplitude provided by the vertical montage.

    Matched MeSH terms: Acoustic Stimulation/methods*
  18. Ibrahim IA, Ting HN, Moghavvemi M
    J Int Adv Otol, 2019 Apr;15(1):87-93.
    PMID: 30924771 DOI: 10.5152/iao.2019.4553
    OBJECTIVES: This study uses a new approach for classifying human ethnicity according to auditory brain responses (electroencephalography [EEG] signals) with a high level of accuracy. Moreover, the study presents three different algorithms used to classify human ethnicity using auditory brain responses. The algorithms were tested on Malay and Chinese subjects as a case study.

    MATERIALS AND METHODS: The EEG signal was used as a brain response signal, which was evoked by two auditory stimuli (Tones and Consonant Vowels stimulus). The study was carried out on Malaysians (Malay and Chinese) with normal hearing and with hearing loss. A ranking process for the subjects' EEG data and the nonlinear features was used to obtain the maximum classification accuracy.

    RESULTS: The study formulated a Normal Hearing Ethnicity Index and a Sensorineural Hearing Loss Ethnicity Index. These indices classified human ethnicity according to brain auditory responses by using numerical values of response signal features. Three classification algorithms were used to verify the ethnicity classification. The Support Vector Machine (SVM) classified ethnicity with an accuracy of 90% in cases of normal hearing; in cases of sensorineural hearing loss (SNHL), the SVM classified with an accuracy of 84%.

    CONCLUSION: The classification indices categorized or separated the human ethnicity in both hearing cases of normal hearing and SNHL with high accuracy. The SVM classifier provided a good accuracy in the classification of the auditory brain responses. The proposed indices might constitute valuable tools for the classification of the brain responses according to the human ethnicity.
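    The SVM classifier named in the results can be illustrated with a minimal linear SVM trained by the Pegasos stochastic sub-gradient method on synthetic two-dimensional features. This is a didactic stand-in, not the authors' pipeline: the features, labels, and hyperparameters are invented, and in practice a library such as scikit-learn (with kernel SVMs) would be used on real EEG-derived features.

    ```python
    import random

    def train_linear_svm(samples, labels, lam=0.01, epochs=50, seed=1):
        """Minimal linear SVM via the Pegasos stochastic sub-gradient method.
        Labels must be +1 or -1; a constant 1.0 bias feature is appended."""
        rng = random.Random(seed)
        dim = len(samples[0]) + 1
        w = [0.0] * dim
        t = 0
        order = list(range(len(samples)))
        for _ in range(epochs):
            rng.shuffle(order)
            for i in order:
                t += 1
                eta = 1.0 / (lam * t)
                x = samples[i] + [1.0]
                margin = labels[i] * sum(wj * xj for wj, xj in zip(w, x))
                if margin < 1.0:  # inside the margin: hinge-loss gradient step
                    w = [(1 - eta * lam) * wj + eta * labels[i] * xj
                         for wj, xj in zip(w, x)]
                else:             # outside the margin: regularization shrink only
                    w = [(1 - eta * lam) * wj for wj in w]
        return w

    def predict(w, x):
        s = sum(wj * xj for wj, xj in zip(w, x + [1.0]))
        return 1 if s >= 0 else -1

    # Synthetic two-class features standing in for EEG-derived features
    rng = random.Random(7)
    samples, labels = [], []
    for _ in range(100):
        label = rng.choice([1, -1])
        centre = 1.0 if label == 1 else -1.0
        samples.append([rng.gauss(centre, 0.4), rng.gauss(centre, 0.4)])
        labels.append(label)

    w = train_linear_svm(samples, labels)
    accuracy = sum(predict(w, x) == y for x, y in zip(samples, labels)) / len(samples)
    ```

    On these well-separated synthetic classes the learned hyperplane classifies nearly all training points correctly; the accuracies the study reports (90% and 84%) come from far harder, high-dimensional real data.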

    Matched MeSH terms: Acoustic Stimulation/methods