Displaying publications 1 - 20 of 22 in total

  1. Wang Y, See J, Phan RC, Oh YH
    PLoS One, 2015;10(5):e0124674.
    PMID: 25993498 DOI: 10.1371/journal.pone.0124674
    Micro-expression recognition is still in the preliminary stage, owing much to the numerous difficulties faced in the development of datasets. Since micro-expression is an important affective clue for clinical diagnosis and deceit analysis, much effort has gone into the creation of these datasets for research purposes. There are currently two publicly available spontaneous micro-expression datasets--SMIC and CASME II, both with baseline results released using the widely used dynamic texture descriptor LBP-TOP for feature extraction. Although LBP-TOP is popular and widely used, it is still not compact enough. In this paper, we draw further inspiration from the concept of LBP-TOP that considers three orthogonal planes by proposing two efficient approaches for feature extraction. The compact robust form described by the proposed LBP-Six Intersection Points (SIP) and a super-compact LBP-Three Mean Orthogonal Planes (MOP) not only preserves the essential patterns, but also reduces the redundancy that affects the discriminability of the encoded features. Through a comprehensive set of experiments, we demonstrate the strengths of our approaches in terms of recognition accuracy and efficiency.
    Matched MeSH terms: Facial Expression*
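As a reading aid for the entry above, the following is a minimal NumPy sketch of the baseline LBP-TOP descriptor the paper builds on: 8-neighbour LBP histograms computed on the three orthogonal planes (XY, XT, YT) of a video cube and then concatenated. The LBP-SIP and LBP-MOP variants proposed by the authors compress this representation further and are not reproduced here; the helper names, raw (non-uniform) LBP codes, and toy clip size are illustrative assumptions.

```python
import numpy as np

def lbp_histogram(plane, bins=256):
    """Basic (non-uniform) 8-neighbour LBP histogram of a 2D array."""
    c = plane[1:-1, 1:-1]
    neighbours = [
        plane[:-2, :-2], plane[:-2, 1:-1], plane[:-2, 2:],
        plane[1:-1, 2:], plane[2:, 2:],   plane[2:, 1:-1],
        plane[2:, :-2],  plane[1:-1, :-2],
    ]
    codes = np.zeros(c.shape, dtype=np.int32)
    for bit, n in enumerate(neighbours):
        codes += (n >= c).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)

def lbp_top(video):
    """Concatenate mean LBP histograms over the XY, XT and YT planes
    of a (T, H, W) grayscale video cube."""
    t, h, w = video.shape
    xy = np.mean([lbp_histogram(video[i]) for i in range(t)], axis=0)
    xt = np.mean([lbp_histogram(video[:, j, :]) for j in range(h)], axis=0)
    yt = np.mean([lbp_histogram(video[:, :, k]) for k in range(w)], axis=0)
    return np.concatenate([xy, xt, yt])        # 3 x 256 = 768-dimensional feature

features = lbp_top(np.random.randint(0, 256, (30, 64, 64)))   # toy 30-frame clip
print(features.shape)                                          # (768,)
```

A practical implementation would typically use uniform LBP patterns and block-wise histograms; the redundancy of the full 768-dimensional concatenation is exactly what the SIP/MOP variants aim to remove.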
  2. Yasmin S, Pathan RK, Biswas M, Khandaker MU, Faruque MRI
    Sensors (Basel), 2020 Sep 21;20(18).
    PMID: 32967087 DOI: 10.3390/s20185391
    Compelling facial expression recognition (FER) processes have been utilized in very successful fields like computer vision, robotics, artificial intelligence, and dynamic texture recognition. However, a critical problem of FER with the traditional local binary pattern (LBP) is the loss of information from neighboring pixels at different scales, which can affect the texture of facial images. To overcome such limitations, this study describes a new extended LBP method to extract feature vectors from images and detect the facial expression in each image. The proposed method is based on the bitwise AND operation of two rotational kernels applied on LBP(8,1) and LBP(8,2) and is evaluated on two publicly accessible datasets. Firstly, the facial parts are detected and the essential components of a face are observed, such as eyes, nose, and lips. The portion of the face is then cropped to reduce the dimensions and an unsharp masking kernel is applied to sharpen the image. The filtered images then pass through the feature extraction stage before classification. Four machine learning classifiers were used to verify the proposed method. This study shows that the proposed multi-scale featured local binary pattern (MSFLBP), together with Support Vector Machine (SVM), outperformed the recent LBP-based state-of-the-art approaches resulting in an accuracy of 99.12% for the Extended Cohn-Kanade (CK+) dataset and 89.08% for the Karolinska Directed Emotional Faces (KDEF) dataset.
    Matched MeSH terms: Facial Expression*
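The general pipeline in the entry above (unsharp masking, LBP at two radii combined bitwise, histogram features, SVM) can be illustrated with standard scikit-image and scikit-learn calls. This is only a hedged sketch of the idea: the specific rotational kernels and the exact MSFLBP definition come from the paper and are not reproduced, and the function name, parameter values, and toy data below are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import unsharp_mask
from sklearn.svm import SVC

def two_scale_lbp_features(face):
    """Histogram of bitwise-AND-combined LBP codes at radii 1 and 2."""
    sharp = np.clip(unsharp_mask(face, radius=1.0, amount=1.0), 0.0, 1.0)
    sharp = (sharp * 255).astype(np.uint8)                 # back to 8-bit for LBP coding
    lbp_r1 = local_binary_pattern(sharp, P=8, R=1).astype(np.uint8)
    lbp_r2 = local_binary_pattern(sharp, P=8, R=2).astype(np.uint8)
    combined = np.bitwise_and(lbp_r1, lbp_r2)              # fuse the two scales
    hist, _ = np.histogram(combined, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)

# Toy usage: random patches stand in for cropped CK+/KDEF face images.
X = np.stack([two_scale_lbp_features(np.random.rand(64, 64)) for _ in range(40)])
y = np.repeat(np.arange(4), 10)                            # four dummy expression labels
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```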
  3. Irwantoro K, Nimsha Nilakshi Lennon N, Mareschal I, Miflah Hussain Ismail A
    Q J Exp Psychol (Hove), 2023 Feb;76(2):450-459.
    PMID: 35360991 DOI: 10.1177/17470218221094296
    The influence of context on facial expression classification is most often investigated using simple cues in static faces portraying basic expressions with a fixed emotional intensity. We examined (1) whether a perceptually rich, dynamic audiovisual context, presented in the form of movie clips (to achieve closer resemblance to real life), affected the subsequent classification of dynamic basic (happy) and non-basic (sarcastic) facial expressions and (2) whether people's susceptibility to contextual cues was related to their ability to classify facial expressions viewed in isolation. Participants classified facial expressions (gradually progressing from neutral to happy/sarcastic in increasing intensity) that followed movie clips. Classification was relatively more accurate and faster when the preceding context predicted the upcoming expression, compared with when the context did not. Speeded classifications suggested that predictive contexts reduced the emotional intensity required to be accurately classified. More importantly, we show for the first time that participants' accuracy in classifying expressions without an informative context correlated with the magnitude of the contextual effects experienced by them: poor classifiers of isolated expressions were more susceptible to a predictive context. Our findings support the emerging view that contextual cues and individual differences must be considered when explaining mechanisms underlying facial expression classification.
    Matched MeSH terms: Facial Expression*
  4. Al Qudah M, Mohamed A, Lutfi S
    Sensors (Basel), 2023 Mar 27;23(7).
    PMID: 37050571 DOI: 10.3390/s23073513
    Several studies have been conducted using both visual and thermal facial images to identify human affective states. Despite the advantages of thermal facial images in recognizing spontaneous human affects, few studies have focused on facial occlusion challenges in thermal images, particularly eyeglasses and facial hair occlusion. As a result, three classification models are proposed in this paper to address the problem of thermal occlusion in facial images, with six basic spontaneous emotions being classified. The first proposed model in this paper is based on six main facial regions, including the forehead, tip of the nose, cheeks, mouth, and chin. The second model deconstructs the six main facial regions into multiple subregions to investigate the efficacy of subregions in recognizing the human affective state. The third proposed model in this paper uses selected facial subregions, free of eyeglasses and facial hair (beard, mustaches). Nine statistical features are computed on apex and onset thermal images. Furthermore, four feature selection techniques with two classification algorithms are proposed for further investigation. According to the comparative analysis presented in this paper, the results obtained from the three proposed models were promising and comparable to those of other studies.
    Matched MeSH terms: Facial Expression*
  5. Alkawaz MH, Basori AH, Mohamad D, Mohamed F
    ScientificWorldJournal, 2014;2014:367013.
    PMID: 25136663 DOI: 10.1155/2014/367013
    Generating extreme appearances such as sweating when scared, tears when happy (crying), and blushing (in anger and happiness) is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and colors are integrated into a single animation model to create realistic facial expressions of a 3D avatar. The physical properties of muscles, emotions, or the fluid properties with sweating and tears initiators are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on the facial skin color appearance are measured using the pulse oximeter system and the 3D skin analyzer. The result shows that virtual human facial expression is enhanced by mimicking actual sweating and tears simulations for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries as well as computer graphics.
    Matched MeSH terms: Facial Expression*
  6. Nagarajan R, Hariharan M, Satiyan M
    J Med Syst, 2012 Aug;36(4):2225-34.
    PMID: 21465183 DOI: 10.1007/s10916-011-9690-5
    Developing tools to assist physically disabled and immobilized people through facial expression is a challenging area of research and has attracted many researchers recently. In this paper, a luminance-sticker-based facial expression recognition approach is proposed. Recognition of facial expression is carried out by employing Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families with their different orders (db1 to db20, Coif1 to Coif5 and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expression and to evaluate their computational time. Standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of wavelet family. This standard deviation is used to form a set of feature vectors for classification. In this study, conventional validation and cross validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN) and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
    Matched MeSH terms: Facial Expression*
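For the wavelet-based entry above, a minimal PyWavelets sketch of the described feature construction follows: a level-1 2D DWT of a face image and the standard deviation of each sub-band's coefficients as the feature vector. The wavelet name 'db4' is just one of the orders the paper sweeps (db1-db20, Coif1-Coif5, Sym2-Sym8); luminance-sticker detection and the ANN/kNN/LDA classifiers are omitted.

```python
import numpy as np
import pywt

def dwt_std_features(image, wavelet="db4"):
    """Standard deviation of each sub-band of a level-1 2D DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    return np.array([np.std(cA), np.std(cH), np.std(cV), np.std(cD)])

features = dwt_std_features(np.random.rand(128, 128))   # toy stand-in for a face image
print(features)                                          # four values per image and per wavelet order
```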
  7. Taylor D, Hartmann D, Dezecache G, Te Wong S, Davila-Ross M
    Sci Rep, 2019 03 21;9(1):4961.
    PMID: 30899046 DOI: 10.1038/s41598-019-39932-6
    Facial mimicry is a central feature of human social interactions. Although it has been evidenced in other mammals, no study has yet shown that this phenomenon can reach the level of precision seen in humans and gorillas. Here, we studied the facial complexity of group-housed sun bears, a typically solitary species, with special focus on testing for exact facial mimicry. Our results provided evidence that the bears have the ability to mimic the expressions of their conspecifics and that they do so by matching the exact facial variants they interact with. In addition, the data showed the bears produced the open-mouth faces predominantly when they received the recipient's attention, suggesting a degree of social sensitivity. Our finding questions the relationship between communicative complexity and social complexity, and suggests the possibility that the capacity for complex facial communication is phylogenetically more widespread than previously thought.
    Matched MeSH terms: Facial Expression*
  8. Sheppard E, Pillai D, Wong GT, Ropar D, Mitchell P
    J Autism Dev Disord, 2016 Apr;46(4):1247-54.
    PMID: 26603886 DOI: 10.1007/s10803-015-2662-8
    How well can neurotypical adults interpret mental states in people with ASD? The reactions of 'targets' (ASD and neurotypical) to four events were video-recorded and then shown to neurotypical participants whose task was to identify which event the target had experienced. In study 1, participants were more successful for neurotypical than ASD targets. In study 2, participants rated ASD targets as equally expressive as neurotypical targets for three of the events, while in study 3 participants gave different verbal descriptions of the reactions of ASD and neurotypical targets. It thus seems people with ASD react differently but not less expressively to events. Because neurotypicals are ineffective in interpreting the behaviour of those with ASD, this could contribute to the social difficulties in ASD.
    Matched MeSH terms: Facial Expression*
  9. Agbolade O, Nazri A, Yaakob R, Ghani AA, Cheah YK
    BMC Bioinformatics, 2019 Dec 02;20(1):619.
    PMID: 31791234 DOI: 10.1186/s12859-019-3153-2
    BACKGROUND: Expression in H. sapiens plays a remarkable role when it comes to social communication. The identification of this expression by human beings is relatively easy and accurate. However, achieving the same result in 3D by machine remains a challenge in computer vision. This is due to the current challenges facing facial data acquisition in 3D, such as lack of homology and complex mathematical analysis for facial point digitization. This study proposes facial expression recognition in humans with the application of Multi-points Warping for 3D facial landmarks by building a template mesh as a reference object. This template mesh is thereby applied to each of the target meshes on the Stirling/ESRC and Bosphorus datasets. The semi-landmarks are allowed to slide along tangents to the curves and surfaces until the bending energy between a template and a target form is minimal, and localization error is assessed using Procrustes ANOVA. By using Principal Component Analysis (PCA) for feature selection, classification is done using Linear Discriminant Analysis (LDA).

    RESULT: The localization error is validated on the two datasets with superior performance over the state-of-the-art methods, and variation in the expression is visualized using Principal Components (PCs). The deformations show various expression regions in the faces. The results indicate that the Sad expression has the lowest recognition accuracy on both datasets. The classifier achieved recognition accuracies of 99.58% and 99.32% on Stirling/ESRC and Bosphorus, respectively.

    CONCLUSION: The results demonstrate that the method is robust and in agreement with the state-of-the-art results.

    Matched MeSH terms: Facial Expression*
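The classification step in the entry above (PCA for feature selection followed by LDA) maps directly onto a short scikit-learn pipeline. The 3D landmark sliding and Procrustes alignment that produce the input features are specific to the paper and are not reproduced; random vectors, the component count, and the class count of seven are stand-in assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X = np.random.rand(120, 300)        # 120 faces x 300 flattened landmark coordinates (toy data)
y = np.random.randint(0, 7, 120)    # seven hypothetical expression classes

# Reduce dimensionality with PCA, then classify in the reduced space with LDA.
clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
clf.fit(X, y)
print(clf.score(X, y))
```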
  10. Maruthapillai V, Murugappan M
    PLoS One, 2016;11(2):e0149003.
    PMID: 26859884 DOI: 10.1371/journal.pone.0149003
    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (the distance from each marker to the center of the face) and change in marker distance (the change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
    Matched MeSH terms: Facial Expression*
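The feature computation in the entry above can be sketched once the eight virtual markers have been placed and tracked (for example with a Lucas-Kanade optical-flow tracker), so that their (x, y) positions per frame are available. Only the distance-based features and the mean, variance, and root-mean-square statistics are shown; marker placement and the kNN/PNN classifiers are omitted, and the array shapes and face-centre handling are assumptions.

```python
import numpy as np

def marker_features(tracks, face_centre):
    """tracks: (T, 8, 2) marker positions over T frames; face_centre: (2,) array."""
    dist = np.linalg.norm(tracks - face_centre, axis=2)      # (T, 8) marker-to-centre distances
    delta = dist - dist[0]                                   # change relative to the initial positions
    feats = []
    for signal in (dist, delta):
        feats += [signal.mean(axis=0),                       # mean
                  signal.var(axis=0),                        # variance
                  np.sqrt((signal ** 2).mean(axis=0))]       # root mean square
    return np.concatenate(feats)                             # 2 signals x 3 stats x 8 markers = 48 values

feats = marker_features(np.random.rand(50, 8, 2) * 100, np.array([50.0, 50.0]))
print(feats.shape)                                           # (48,)
```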
  11. Oh YH, See J, Le Ngo AC, Phan RC, Baskaran VM
    Front Psychol, 2018;9:1128.
    PMID: 30042706 DOI: 10.3389/fpsyg.2018.01128
    Over the last few years, automatic facial micro-expression analysis has garnered increasing attention from experts across different disciplines because of its potential applications in various fields such as clinical diagnosis, forensic investigation and security systems. Advances in computer algorithms and video acquisition technology have rendered machine analysis of facial micro-expressions possible today, in contrast to decades ago when it was primarily the domain of psychiatrists where analysis was largely manual. Indeed, although the study of facial micro-expressions is a well-established field in psychology, it is still relatively new from the computational perspective with many interesting problems. In this survey, we present a comprehensive review of state-of-the-art databases and methods for micro-expression spotting and recognition. Individual stages involved in the automation of these tasks are also described and reviewed at length. In addition, we also deliberate on the challenges and future directions in this growing field of automatic facial micro-expression analysis.
    Matched MeSH terms: Facial Expression
  12. Lim JZ, Mountstephens J, Teo J
    Sensors (Basel), 2020 Apr 22;20(8).
    PMID: 32331327 DOI: 10.3390/s20082384
    The ability to detect users' emotions for the purpose of emotion engineering is currently one of the main endeavors of machine learning in affective computing. Among the more common approaches to emotion detection are methods that rely on electroencephalography (EEG), facial image processing and speech inflections. Although eye-tracking is fast becoming one of the most commonly used sensor modalities in affective computing, it is still a relatively new approach for emotion detection, especially when it is used exclusively. In this survey paper, we present a review on emotion recognition using eye-tracking technology, including a brief introductory background on emotion modeling, eye-tracking devices and approaches, emotion stimulation methods, the emotion-relevant features extractable from eye-tracking data, and most importantly, a categorical summary and taxonomy of the current literature which relates to emotion recognition using eye-tracking. This review concludes with a discussion on the current open research problems and prospective future research directions that will be beneficial for expanding the body of knowledge in emotion detection using eye-tracking as the primary sensor modality.
    Matched MeSH terms: Facial Expression
  13. Molavi M, Yunus J, Utama NP
    Psychol Res Behav Manag, 2016;9:105-14.
    PMID: 27307772 DOI: 10.2147/PRBM.S100495
    Fasting can influence psychological and mental states. In the current study, the effect of periodical fasting on the processing of emotion through gazed facial expression, as a realistic multisource of social information, was investigated for the first time. The dynamic cue-target task was applied via behavioral and event-related potential measurements for 40 participants to reveal the temporal and spatial brain activities before, during, and after fasting periods. The significance of fasting included several effects. The amplitude of the N1 component decreased over the centroparietal scalp during fasting. Furthermore, the reaction time during the fasting period decreased. The self-measurement of deficit arousal as well as mood increased during the fasting period. There was a significant contralateral alteration of P1 over the occipital area for the happy facial expression stimuli. The significant effect of gazed expression and its interaction with the emotional stimuli was indicated by the amplitude of N1. Furthermore, the findings of the study confirmed the validity effect as a congruency between gaze and target position, as indicated by the increase in P3 amplitude over the centroparietal area as well as slower reaction times from behavioral response data during the incongruent (invalid) condition between gaze and target position compared with the valid condition. Results of this study showed that attention to facial expression stimuli as a kind of communicative social signal was affected by fasting. Also, fasting improved the mood of practitioners. Moreover, findings from the behavioral and event-related potential data analyses indicated that the neural dynamics of facial emotion are processed faster than those of gazing, as the participants tended to react faster and preferred to rely on the type of facial emotion rather than on gaze direction while doing the task. For happy facial expression stimuli, right-hemisphere activation was greater than that of the left hemisphere, indicating consistency with the emotional lateralization concept rather than the valence concept of emotional processing.
    Matched MeSH terms: Facial Expression
  14. Jones BC, DeBruine LM, Flake JK, Liuzza MT, Antfolk J, Arinze NC, et al.
    Nat Hum Behav, 2021 01;5(1):159-169.
    PMID: 33398150 DOI: 10.1038/s41562-020-01007-2
    Over the past 10 years, Oosterhof and Todorov's valence-dominance model has emerged as the most prominent account of how people evaluate faces on social dimensions. In this model, two dimensions (valence and dominance) underpin social judgements of faces. Because this model has primarily been developed and tested in Western regions, it is unclear whether these findings apply to other regions. We addressed this question by replicating Oosterhof and Todorov's methodology across 11 world regions, 41 countries and 11,570 participants. When we used Oosterhof and Todorov's original analysis strategy, the valence-dominance model generalized across regions. When we used an alternative methodology to allow for correlated dimensions, we observed much less generalization. Collectively, these results suggest that, while the valence-dominance model generalizes very well across regions when dimensions are forced to be orthogonal, regional differences are revealed when we use different extraction methods and correlate and rotate the dimension reduction solution. PROTOCOL REGISTRATION: The stage 1 protocol for this Registered Report was accepted in principle on 5 November 2018. The protocol, as accepted by the journal, can be found at https://doi.org/10.6084/m9.figshare.7611443.v1 .
    Matched MeSH terms: Facial Expression
  15. Ganesan I, Thomas T
    Med J Malaysia, 2011 Dec;66(5):507-9.
    PMID: 22390114 MyJurnal
    The Ochoa syndrome is the association of a non-neurogenic neurogenic bladder with abnormal facial muscle expression. Patients are at risk for renal failure due to obstructive uropathy. We report a family of three siblings, with an emphasis on the abnormalities in facial expression. Careful examination shows an unusual co-contraction of the orbicularis oculi and orbicularis oris muscles only when full facial expressions are exhibited, across a range of emotional or voluntary situations. This suggests a peripheral disorder in facial muscle control. Two thirds of patients have anal sphincter abnormalities. Aberrant organisation of the facial motor and urinary-anal sphincter nuclei may explain these symptoms.
    Matched MeSH terms: Facial Expression*
  16. Hamedi M, Salleh ShH, Tan TS, Ismail K, Ali J, Dee-Uam C, et al.
    Int J Nanomedicine, 2011;6:3461-72.
    PMID: 22267930 DOI: 10.2147/IJN.S26619
    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands that can be applied to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures with a different number of gestures in each group are made from the existing facial gestures. Finally, all combinations are trained and classified by a Fuzzy c-means classifier. In conclusion, combinations with the highest recognition accuracy in each group are chosen. An average accuracy of over 90% for the chosen combinations proved their suitability for use as command controllers.
    Matched MeSH terms: Facial Expression*
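A sketch of the signal-conditioning front end described in the entry above: band-pass filtering of a raw facial-EMG channel followed by a windowed root-mean-square (RMS) feature. The 20-450 Hz band, 1 kHz sampling rate, and window length are illustrative assumptions rather than values from the paper, and the fuzzy c-means classification stage is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(emg, fs, lo=20.0, hi=450.0):
    """Zero-phase Butterworth band-pass filter for a 1-D EMG signal."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, emg)

def windowed_rms(emg, win=200):
    """RMS value of consecutive non-overlapping windows of `win` samples."""
    n = len(emg) // win
    segments = emg[: n * win].reshape(n, win)
    return np.sqrt((segments ** 2).mean(axis=1))

fs = 1000.0                                  # assumed sampling rate (Hz)
raw = np.random.randn(5000)                  # stand-in for one recorded EMG channel
features = windowed_rms(bandpass(raw, fs))   # one RMS feature per 200-sample window
print(features.shape)                        # (25,)
```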
  17. Teoh Y, Wallis E, Stephen ID, Mitchell P
    Cognition, 2017 02;159:48-60.
    PMID: 27886521 DOI: 10.1016/j.cognition.2016.11.003
    Past research tells us that individuals can infer information about a target's emotional state and intentions from their facial expressions (Frith & Frith, 2012), a process known as mentalising. This extends to inferring the events that caused the facial reaction (e.g. Pillai, Sheppard, & Mitchell, 2012; Pillai et al., 2014), an ability known as retrodictive mindreading. Here, we enter new territory by investigating whether or not people (perceivers) can guess a target's social context by observing their response to stimuli. In Experiment 1, perceivers viewed targets' responses and were able to determine whether these targets were alone or observed by another person. In Experiment 2, another group of perceivers, without any knowledge of the social context or what the targets were watching, judged whether targets were hiding or exaggerating their facial expressions; and their judgments discriminated between conditions in which targets were observed and alone. Experiment 3 established that another group of perceivers' judgments of social context were associated with estimations of target expressivity to some degree. In Experiments 1 and 2, the eye movements of perceivers also varied between conditions in which targets were observed and alone. Perceivers were thus able to infer a target's social context from their visible response. The results demonstrate an ability to use other minds as a window onto a social context that could not be seen directly.
    Matched MeSH terms: Facial Expression*
  18. Yuvaraj R, Murugappan M, Norlinah MI, Sundaraj K, Khairiyah M
    Dement Geriatr Cogn Disord, 2013;36(3-4):179-96.
    PMID: 23899462 DOI: 10.1159/000353440
    OBJECTIVE: Patients suffering from stroke have a diminished ability to recognize emotions. This paper presents a review of neuropsychological studies that investigated the basic emotion processing deficits involved in individuals with interhemispheric brain (right, left) damage and normal controls, including processing mode (perception) and communication channels (facial, prosodic-intonational, lexical-verbal).
    METHODS: An electronic search was conducted using specific keywords for studies investigating emotion recognition in brain damage patients. The PubMed database, as well as citations and reference lists, was searched up to March 2012. In total, 92 potential articles were identified.
    RESULTS: The findings showed that deficits in emotion perception were more frequently observed in individuals with right brain damage than those with left brain damage when processing facial, prosodic and lexical emotional stimuli.
    CONCLUSION: These findings suggest that the right hemisphere has a unique contribution in emotional processing and provide support for the right hemisphere emotion hypothesis.
    SIGNIFICANCE: This robust deficit in emotion recognition has clinical significance. The extent of emotion recognition deficit in brain damage patients appears to be correlated with a variety of interpersonal difficulties such as complaints of frustration in social relations, feelings of social discomfort, desire to connect with others, feelings of social disconnection and use of controlling behaviors.
    Matched MeSH terms: Facial Expression
  19. Bhidayasiri R, Rattanachaisit W, Phokaewvarangkul O, Lim TT, Fernandez HH
    Parkinsonism Relat Disord, 2019 Feb;59:74-81.
    PMID: 30502095 DOI: 10.1016/j.parkreldis.2018.11.005
    The proper diagnosis of parkinsonian disorders usually involves three steps: identifying core features of parkinsonism; excluding other causes; and collating supportive evidence based on clinical signs or investigations. While the recognition of cardinal parkinsonian features is usually straightforward, the appreciation of clinical features suggestive of specific parkinsonian disorders can be challenging, and often requires greater experience and skills. In this review, we outline the clinical features that are relevant to the differential diagnosis of common neurodegenerative parkinsonian disorders, including Parkinson's disease, multiple system atrophy, progressive supranuclear palsy, and corticobasal degeneration. To make this process relatable to clinicians in practice, we have categorised the list of clinical features into groups according to the typical sequence in which clinicians would elicit them during the examination, starting with observation of facial expression and clinical signs of the face, spotting eye movement abnormalities, examination of tremors and jerky limb movements, and finally, examination of posture and gait dysfunction. This review is not intended to be comprehensive. Rather, we have focused on the most common clinical signs that are potentially key to making the correct diagnosis and those that do not require special skills or training for interpretation. Evidence is also provided, where available, such as diagnostic criteria, consensus statements, clinicopathological studies or large multi-centre registries. Pitfalls are also discussed when relevant to the diagnosis. While no clinical signs are pathognomonic for certain parkinsonian disorders, certain clinical clues may assist in narrowing a differential diagnosis and tailoring focused investigations for the individual patient.
    Matched MeSH terms: Facial Expression
  20. Li C, Yang M, Zhang Y, Lai KW
    Int J Environ Res Public Health, 2022 Nov 14;19(22).
    PMID: 36429697 DOI: 10.3390/ijerph192214976
    PURPOSE: Mental health assessments that combine patients' facial expressions and behaviors have been proven effective, but screening large-scale student populations for mental health problems is time-consuming and labor-intensive. This study aims to provide an efficient and accurate intelligent method that combines artificial intelligence technologies to assist in evaluating the mental health problems of college students and to support further psychological diagnosis and treatment.

    MATERIALS AND METHODS: We propose a mixed-method study of mental health assessment that combines psychological questionnaires with facial emotion analysis to comprehensively evaluate the mental health of students on a large scale. The Depression Anxiety and Stress Scale-21 (DASS-21) is used for the psychological questionnaire. The facial emotion recognition model is implemented by transfer learning based on neural networks, and the model is pre-trained using the FER2013 and CFEE datasets. The FER2013 dataset consists of 35,887 grayscale face images of 48 × 48 pixels. The CFEE dataset contains 950,000 facial images with annotated action units (AUs). Using a random sampling strategy, we sent online questionnaires to 400 college students and received 374 responses, a response rate of 93.5%. After pre-processing, 350 results were available, including 187 male and 153 female students. First, the facial emotion data of students were collected in an online questionnaire test. Then, a pre-trained model was used for emotion recognition. Finally, the online psychological questionnaire scores and the facial emotion recognition model scores were collated to give a comprehensive psychological evaluation score.

    RESULTS: The experimental results show that the classification results of the proposed facial emotion recognition model are broadly consistent with the mental health survey results. This model can be used to improve efficiency. In particular, the accuracy of the facial emotion recognition model proposed in this paper is higher than that of the general mental health model, which only uses the traditional single questionnaire. Furthermore, the absolute errors of this study in the three symptoms of depression, anxiety, and stress are lower than those of other mental health survey results and are only 0.8%, 8.1%, 3.5%, and 1.8%, respectively.

    CONCLUSION: The mixed method combining intelligent methods and scales for mental health assessment has high recognition accuracy. Therefore, it can support efficient large-scale screening of students' psychological problems.

    Matched MeSH terms: Facial Expression
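The transfer-learning setup described in the entry above can be sketched in TensorFlow/Keras: a pretrained image backbone with its top removed, a small classification head, and fine-tuning on 48 × 48 expression crops. The backbone choice (MobileNetV2), the input resizing, and the class count of seven are illustrative assumptions; the paper's exact architecture and the fusion with DASS-21 questionnaire scores are not detailed in the abstract and are not reproduced here.

```python
import tensorflow as tf

NUM_CLASSES = 7                                        # assumed number of expression classes

# Pretrained backbone, frozen for the first fine-tuning stage.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False

inputs = tf.keras.Input(shape=(48, 48, 1))             # FER2013-style grayscale crops in [0, 1]
x = tf.keras.layers.Resizing(96, 96)(inputs)           # upsample to the backbone's input size
x = tf.keras.layers.Concatenate()([x, x, x])           # grayscale -> 3 channels
x = tf.keras.layers.Rescaling(2.0, offset=-1.0)(x)     # map [0, 1] to the [-1, 1] range MobileNetV2 expects
x = base(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=5)      # hypothetical training data
```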