Displaying all 11 publications

  1. Podder KK, Chowdhury MEH, Tahir AM, Mahbub ZB, Khandakar A, Hossain MS, et al.
    Sensors (Basel), 2022 Jan 12;22(2).
    PMID: 35062533 DOI: 10.3390/s22020574
    A real-time Bangla Sign Language interpreter could bring more than 200,000 hearing- and speech-impaired people in Bangladesh into the mainstream workforce. Bangla Sign Language (BdSL) recognition and detection is a challenging topic in computer vision and deep learning research because recognition accuracy can vary with skin tone, hand orientation, and background. This research used deep machine learning models for accurate and reliable recognition of BdSL alphabets and numerals, drawing on two well-suited and robust datasets. The dataset prepared in this study comprises the largest image database for BdSL alphabets and numerals and was designed to reduce inter-class similarity while covering diverse backgrounds and skin tones. The paper compared classification with and without background images to determine the best-performing model for interpreting BdSL alphabets and numerals. The CNN model trained on images with backgrounds proved more effective than the one trained without. In the segmentation approach, hand detection must become more accurate to boost overall sign-recognition accuracy. ResNet18 performed best, with 99.99% accuracy, precision, F1 score, and sensitivity, and 100% specificity, outperforming prior work on BdSL alphabet and numeral recognition. The dataset is made publicly available to support and encourage further research on Bangla Sign Language interpretation so that hearing- and speech-impaired individuals can benefit from it. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Sign Language*
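    The transfer-learning setup described above could be sketched roughly as follows. This is a minimal illustration, not the authors' code: the class count, preprocessing, and training settings are assumptions.

```python
# A minimal sketch of ResNet18 transfer learning for sign-image classification,
# in the spirit of the study above. Class count, preprocessing, and training
# settings are illustrative assumptions, not the authors' configuration.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 49  # placeholder for the BdSL alphabet + numeral classes

# Standard ImageNet preprocessing so the pretrained backbone sees familiar statistics;
# in practice the inputs would come from the BdSL image dataset.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Load an ImageNet-pretrained ResNet18 and replace its classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One training step on stand-in tensors; a real run iterates over the dataset for many epochs.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```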
  2. Khan RU, Khattak H, Wong WS, AlSalman H, Mosleh MAA, Mizanur Rahman SM
    Comput Intell Neurosci, 2021;2021:9023010.
    PMID: 34925497 DOI: 10.1155/2021/9023010
    The deaf-mute population feels helpless when they are not understood by others, and vice versa. This is a significant humanitarian problem that needs a localised solution. To address it, this study implements a convolutional neural network (CNN) with a convolutional block attention module (CBAM) to recognise Malaysian Sign Language (MSL) from images. Two different experiments were conducted on MSL signs, using CBAM-2DResNet (2-Dimensional Residual Network) with the "Within Blocks" and "Before Classifier" integration methods. Metrics such as accuracy, loss, precision, recall, F1-score, confusion matrix, and training time were recorded to evaluate the models' efficiency. The experimental results showed that the CBAM-ResNet models performed well on MSL sign recognition, with accuracy rates of over 90% and little variation between configurations. The "Before Classifier" CBAM-ResNet models are more efficient than the "Within Blocks" models. The best-trained CBAM-2DResNet model was therefore chosen to develop a real-time sign recognition system that translates between sign language and text, easing communication between deaf-mute and hearing people. All experimental results indicated that the "Before Classifier" CBAM-ResNet models are more efficient at recognising MSL and merit further research. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Sign Language*
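    As a rough illustration of the attention mechanism named above, a CBAM-style block combines channel attention and spatial attention; placing one such block on the backbone's final feature map corresponds to the "Before Classifier" variant. The integration point, channel count, and class count below are assumptions, not the paper's configuration.

```python
# Minimal sketch of a CBAM-style attention block applied "before the classifier"
# of a ResNet-like backbone. Dimensions and class count are placeholders.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        return x * torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))

# "Before Classifier": apply CBAM once on the backbone's final feature map,
# then pool and classify (512 channels and 30 MSL classes are placeholders).
features = torch.randn(8, 512, 7, 7)   # stand-in for ResNet backbone features
attended = CBAM(512)(features)
logits = nn.Linear(512, 30)(attended.mean(dim=(2, 3)))
```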
  3. Ahmed MA, Zaidan BB, Zaidan AA, Salih MM, Lakulu MMB
    Sensors (Basel), 2018 Jul 09;18(7).
    PMID: 29987266 DOI: 10.3390/s18072208
    Loss of the ability to speak or hear exerts psychological and social impacts on the affected persons because of the lack of proper communication. Multiple and systematic scholarly interventions, varying according to context, have been implemented to overcome disability-related difficulties. Sign language recognition (SLR) systems based on sensory gloves are significant innovations that aim to capture data on the shape or movement of the human hand. Innovative technology in this area remains limited and dispersed, so the available trends and gaps should be explored to provide valuable insights into the technological landscape. Thus, a review was conducted to create a coherent taxonomy describing the latest research, divided into four main categories: development, framework, other hand gesture recognition, and reviews and surveys. We then analyse the characteristics of glove-based SLR systems, develop a roadmap for technology evolution, discuss their limitations, and provide valuable insights into the technological landscape. This will help researchers understand the current options and gaps in this area, thus contributing to this line of research.
    Matched MeSH terms: Sign Language*
  4. Ewe ELR, Lee CP, Lim KM, Kwek LC, Alqahtani A
    PLoS One, 2024;19(4):e0298699.
    PMID: 38574042 DOI: 10.1371/journal.pone.0298699
    Sign language recognition presents significant challenges due to the intricate nature of hand gestures and the need to capture fine-grained details. In response to these challenges, a novel approach is proposed: the Lightweight Attentive VGG16 with Random Forest (LAVRF) model. LAVRF introduces a refined adaptation of the VGG16 model integrated with attention modules, complemented by a Random Forest classifier. By streamlining the VGG16 architecture, the Lightweight Attentive VGG16 manages complexity effectively while incorporating attention mechanisms that dynamically concentrate on pertinent regions within input images, resulting in enhanced representation learning. The Random Forest classifier provides notable benefits, including proficient handling of high-dimensional feature representations, reduction of variance and overfitting, and resilience against noisy and incomplete data. Additionally, model performance is further improved through hyperparameter optimization using Optuna in conjunction with hill climbing, which efficiently explores the hyperparameter space to discover optimal configurations. The proposed LAVRF model demonstrates outstanding accuracy on three datasets, achieving 99.98%, 99.90%, and 100% on the American Sign Language, American Sign Language with Digits, and NUS Hand Posture datasets, respectively. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Sign Language*
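    A minimal sketch of the "CNN features + Random Forest" idea, with Optuna searching the Random Forest hyperparameters, is given below. It uses a plain, unmodified VGG16 backbone (no lightweight or attentive changes) and omits the hill-climbing step the paper pairs with Optuna; all names, data, and search ranges are illustrative assumptions.

```python
# Sketch: frozen VGG16 feature extractor -> Random Forest classifier,
# with Optuna tuning the forest's hyperparameters. Not the LAVRF model itself.
import numpy as np
import optuna
import torch
from torchvision import models
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Frozen, ImageNet-pretrained VGG16 convolutional backbone as a fixed feature extractor.
backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

def extract_features(images: torch.Tensor) -> np.ndarray:
    with torch.no_grad():
        feats = backbone(images)              # (N, 512, 7, 7) for 224x224 input
    return feats.flatten(start_dim=1).numpy()

# Stand-in data; in practice these come from the sign-language image dataset.
X = extract_features(torch.randn(64, 3, 224, 224))
y = np.random.randint(0, 24, size=64)

def objective(trial: optuna.Trial) -> float:
    clf = RandomForestClassifier(
        n_estimators=trial.suggest_int("n_estimators", 100, 500),
        max_depth=trial.suggest_int("max_depth", 5, 30),
        random_state=0,
    )
    return cross_val_score(clf, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=10)
print(study.best_params)
```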
  5. Pathan RK, Biswas M, Yasmin S, Khandaker MU, Salman M, Youssef AAF
    Sci Rep, 2023 Oct 09;13(1):16975.
    PMID: 37813932 DOI: 10.1038/s41598-023-43852-x
    Sign language recognition is a breakthrough for communication in the deaf-mute community and has been a critical research topic for years. Although some previous studies have successfully recognized sign language, they required costly instruments, including sensors, specialized devices, and high-end processing power. Such drawbacks can be overcome by employing artificial intelligence-based techniques: in this era of advanced mobile technology, using a camera to capture video or images is far easier, so this study demonstrates a cost-effective technique to detect American Sign Language (ASL) from an image dataset. The "Finger Spelling, A" dataset is used, covering 24 letters (j and z are excluded because they involve motion). The main reason for using this dataset is that its images have complex backgrounds with varied environments and scene colors. Two layers of image processing are used: in the first layer, images are processed as a whole for training, and in the second layer, the hand landmarks are extracted. A multi-headed convolutional neural network (CNN) model is proposed to train on these two layers and is tested with 30% of the dataset. To avoid overfitting, data augmentation and dynamic learning rate reduction are applied. The proposed model achieves 98.981% test accuracy. This study may thus help develop an efficient human-machine communication system for the deaf-mute community. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Sign Language*
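    A minimal sketch of a two-headed network in this spirit is shown below: one branch sees the whole image and the other a flattened vector of hand landmarks, with their features concatenated before classification. Layer sizes and the 21-point, 3-coordinate landmark layout are illustrative assumptions, not the paper's architecture.

```python
# Two-input ("multi-headed") classifier sketch: raw image branch + hand-landmark branch,
# plus the "dynamic learning rate reduction" idea via ReduceLROnPlateau.
import torch
import torch.nn as nn

class TwoHeadSignNet(nn.Module):
    def __init__(self, num_classes=24):   # 24 static ASL letters (j and z excluded)
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.landmark_branch = nn.Sequential(
            nn.Linear(21 * 3, 64), nn.ReLU(),   # assumed 21 landmarks x (x, y, z)
        )
        self.classifier = nn.Linear(32 + 64, num_classes)

    def forward(self, image, landmarks):
        fused = torch.cat([self.image_branch(image), self.landmark_branch(landmarks)], dim=1)
        return self.classifier(fused)

model = TwoHeadSignNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Drop the learning rate when the validation loss plateaus.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=3)

# Forward pass on stand-in inputs: a batch of images plus flattened landmark vectors.
logits = model(torch.randn(2, 3, 128, 128), torch.randn(2, 63))
```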
  6. Raihan MJ, Labib MI, Jim AAJ, Tiang JJ, Biswas U, Nahid AA
    Sensors (Basel), 2024 Aug 19;24(16).
    PMID: 39205045 DOI: 10.3390/s24165351
    Sign language is undoubtedly a common way of communication among deaf and non-verbal people, but it is not common for hearing people to use sign language to express feelings or share information in everyday life. A significant communication gap therefore exists between deaf and hearing individuals, despite both groups experiencing similar emotions and sentiments. In this paper, we developed a convolutional neural network with squeeze-and-excitation (SE) blocks to predict sign language signs and built a smartphone application that gives users access to the ML model. The SE block provides attention over the image channels, improving the performance of the model, while the smartphone application brings the ML model closer to people so that everyone can benefit from it. In addition, we used SHapley Additive exPlanations (SHAP) to interpret the black-box nature of the ML model and understand its workings from within. Using our ML model, we achieved an accuracy of 99.86% on the KU-BdSL dataset. The SHAP analysis shows that the model primarily relies on hand-related visual cues to predict sign language signs, aligning with human communication patterns. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Sign Language*
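    A squeeze-and-excitation block of the kind referenced above can be sketched in a few lines; the reduction ratio and channel count here are illustrative assumptions.

```python
# Minimal squeeze-and-excitation (SE) block: pool each channel to a scalar ("squeeze"),
# pass through a small bottleneck MLP, and rescale the channels ("excite").
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.excite(x.mean(dim=(2, 3)))   # squeeze: global average pool per channel
        return x * weights.view(b, c, 1, 1)         # excite: reweight each channel

# Rescales a 64-channel feature map; channels carrying hand cues can be emphasised.
out = SEBlock(64)(torch.randn(1, 64, 28, 28))
```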
  7. Tan CK, Lim KM, Chang RKY, Lee CP, Alqahtani A
    Sensors (Basel), 2023 Jun 14;23(12).
    PMID: 37420722 DOI: 10.3390/s23125555
    Hand gesture recognition (HGR) is a crucial area of research that enhances communication by overcoming language barriers and facilitating human-computer interaction. Although previous works in HGR have employed deep neural networks, they fail to encode the orientation and position of the hand in the image. To address this issue, this paper proposes HGR-ViT, a Vision Transformer (ViT) model with an attention mechanism for hand gesture recognition. A hand gesture image is first split into fixed-size patches, and positional embeddings are added to the patch embeddings to form learnable vectors that capture the positional information of the hand patches. The resulting sequence of vectors is then fed to a standard Transformer encoder to obtain the hand gesture representation, and a multilayer perceptron head on top of the encoder output classifies the gesture into the correct class. The proposed HGR-ViT achieves accuracies of 99.98%, 99.36%, and 99.85% on the American Sign Language (ASL) dataset, the ASL with Digits dataset, and the National University of Singapore (NUS) hand gesture dataset, respectively. (An illustrative code sketch follows this entry.)
    Matched MeSH terms: Sign Language
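    The patch-embedding pipeline described above can be sketched roughly as follows; all sizes are illustrative assumptions, and mean pooling over tokens stands in for the class token a standard ViT would use.

```python
# ViT-style classifier sketch: patchify, add positional embeddings, encode with a
# Transformer, classify with an MLP head. Not HGR-ViT's actual configuration.
import torch
import torch.nn as nn

IMG, PATCH, DIM, CLASSES = 224, 16, 256, 24
num_patches = (IMG // PATCH) ** 2

patchify = nn.Conv2d(3, DIM, kernel_size=PATCH, stride=PATCH)   # split + linearly project patches
pos_embed = nn.Parameter(torch.zeros(1, num_patches, DIM))      # learnable positional embedding
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=DIM, nhead=8, batch_first=True), num_layers=4)
mlp_head = nn.Linear(DIM, CLASSES)

x = torch.randn(2, 3, IMG, IMG)                    # a batch of hand-gesture images
tokens = patchify(x).flatten(2).transpose(1, 2)    # (B, num_patches, DIM)
encoded = encoder(tokens + pos_embed)              # standard Transformer encoder
logits = mlp_head(encoded.mean(dim=1))             # mean-pool tokens (simplified stand-in for a class token)
```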
  8. Jacob SA, Palanisamy UD, Napier J, Verstegen D, Dhanoa A, Chong EY
    Acad Med, 2021 May 25.
    PMID: 34039854 DOI: 10.1097/ACM.0000000000004181
    There is a need for culturally competent health care providers (HCPs) to provide care to deaf signers, who are members of a linguistic and cultural minority group. Many deaf signers have lower health literacy levels, owing to deprivation of incidental learning opportunities and the inaccessibility of health-related materials, which increases their risk of poorer health outcomes. Communication barriers arise because HCPs are ill-prepared to serve this population, with deaf signers reporting poor-quality interactions. This has translated into errors in diagnosis, patient nonadherence, and ineffective health information, resulting in mistrust of the health care system and reluctance to seek treatment. Sign language interpreters have often not received in-depth medical training, which further complicates the dynamic process of medical interpreting. HCPs should thus become more culturally competent, empowering them to provide culturally and linguistically concordant services to deaf signers. HCPs who received training in cultural competency showed increased knowledge and confidence in interacting with deaf signers. Similarly, deaf signers reported more positive experiences when interacting with medically certified interpreters, HCPs with sign language skills, and practitioners who made an effort to improve communication. However, cultural competency programs within health care education remain inconsistent. Caring for deaf signers requires complex, integrated competencies that need explicit attention and repeated practice in realistic, authentic learning tasks ordered from simple to complex. Attention to the needs of deaf signers can start early in the curriculum, using examples of deaf signers in lectures and case discussions, followed by explicit discussion of Deaf cultural norms and the potential risks of low written and spoken language literacy. Students can subsequently engage in role plays with each other or with representatives of the local signing deaf community. This would help ensure that future HCPs are equipped with the knowledge and skills necessary to provide appropriate care and ensure equitable health care access for deaf signers.
    Matched MeSH terms: Sign Language
  9. Mohd Yasin MH, Sahari N, Nasution AH
    MyJurnal
    A literate and numerate population is the goal of any modern industrialized society. Literacy and mathematics skills are the means by which children are equipped for the educational processes on which their future depends. Deaf and hard-of-hearing students' reading and mathematics skills are lower than those of their peers because of their hearing impairment. Before these skills can be enhanced, the students' current standard of literacy and mathematics must first be identified. For this reason, the Malaysian Ministry of Education initiated the Literacy and Numeracy Screening (LINUS) program in 2009. However, problems arose in the assessment method of LINUS screening for these students, since it does not accommodate their situation and needs. Therefore, the researchers introduced an internet-based Literacy and Mathematics Assessment (iLiMA) prototype to overcome those problems. In the iLiMA prototype, sign language instruction videos are used to standardize the assessment method and ensure an unbiased assessment. The methodology used to develop this system is the Evolutionary Process Model (prototyping). The usability of the iLiMA prototype was assessed with the Computer System Usability Questionnaire (CSUQ), administered through a web-based survey. The results indicate that the iLiMA prototype is usable and that teachers are satisfied with it. The iLiMA prototype thus has the potential to give deaf and hard-of-hearing students a standardized, unbiased literacy and mathematics assessment.
    Matched MeSH terms: Sign Language
  10. Jacob SA, Chong EY, Goh SL, Palanisamy UD
    Mhealth, 2021;7:29.
    PMID: 33898598 DOI: 10.21037/mhealth.2020.01.04
    Background: Deaf and hard-of-hearing (DHH) patients have trouble communicating with community pharmacists and accessing the healthcare system. This study explored views on the design and features of a proposed mobile health (mHealth) app intended to bridge the communication gap between community pharmacists and DHH patients.

    Methods: A community-based participatory research method was utilized. Two focus group discussions (FGDs) were conducted in Malaysian sign language (BIM) with a total of 10 DHH individuals. Respondents were recruited using purposive sampling. Video-recordings were transcribed and analyzed using a thematic approach.

    Results: Two themes emerged: (I) challenges and scepticism towards the healthcare system; and (II) features of the mHealth app. Respondents expressed fears and concerns about accessing healthcare services and stressed the need for sign language interpreters. There were also concerns about data privacy and security. With regard to app features, the majority preferred videos over text to convey information about their disease and medication, owing to their lower literacy levels.

    Conclusions: For an mHealth app to be effective, app designers must ensure the app is individualised according to the cultural and linguistic diversity of the target audience. Pharmacists should also educate patients on the potential benefits of the app in terms of assisting patients with their medicine-taking.

    Matched MeSH terms: Sign Language
  11. Majid A, Roberts SG, Cilissen L, Emmorey K, Nicodemus B, O'Grady L, et al.
    Proc Natl Acad Sci U S A, 2018 Nov 06;115(45):11369-11376.
    PMID: 30397135 DOI: 10.1073/pnas.1720419115
    Is there a universal hierarchy of the senses, such that some senses (e.g., vision) are more accessible to consciousness and linguistic description than others (e.g., smell)? The long-standing presumption in Western thought has been that vision and audition are more objective than the other senses, serving as the basis of knowledge and understanding, whereas touch, taste, and smell are crude and of little value. This predicts that humans ought to be better at communicating about sight and hearing than the other senses, and decades of work based on English and related languages certainly suggests this is true. However, how well does this reflect the diversity of languages and communities worldwide? To test whether there is a universal hierarchy of the senses, stimuli from the five basic senses were used to elicit descriptions in 20 diverse languages, including 3 unrelated sign languages. We found that languages differ fundamentally in which sensory domains they linguistically code systematically, and how they do so. The tendency for better coding in some domains can be explained in part by cultural preoccupations. Although languages seem free to elaborate specific sensory domains, some general tendencies emerge: for example, with some exceptions, smell is poorly coded. The surprise is that, despite the gradual phylogenetic accumulation of the senses, and the imbalances in the neural tissue dedicated to them, no single hierarchy of the senses imposes itself upon language.
    Matched MeSH terms: Sign Language