
Emotion Detection and Recognition

Mostafa Dashti

Emotion Detection and Recognition.

Supervisors: Gabriella Olmo, Vito De Feo. Politecnico di Torino, Master's degree programme in Computer Engineering (Ingegneria Informatica), 2021

PDF (Tesi_di_laurea) - Thesis
License: Creative Commons Attribution Non-commercial No Derivatives.

Abstract:

Alexithymia is a subclinical condition characterized by a reduced awareness of one's own emotional states, with deep effects on mental health and social communication. Although the clinical significance of this condition is high, how an impaired anterior insula (AI) leads to alexithymia remains unclear. The term "acquired alexithymia" refers to the alexithymia shown by a large proportion of people with acquired brain injuries, such as stroke (damage to the nervous system) or traumatic brain injury caused by an accident. The Geneva Emotion Recognition Test (GERT) is a performance-based test used by clinical experts to estimate individual differences in the ability to identify other people's emotions from the face, voice, and body; this ability is one of the main components of emotional competence. Neuropsychologists use the test to check for possible acquired alexithymia in patients whose brains have been injured in an accident. Traditionally, the test is part of a rehabilitation process arranged by clinical experts in a controlled environment.

This study describes the structure and results of an application that gives patients the opportunity to take the test from home and to receive proper treatment. The idea is to develop a new version of the GERT that improves the accuracy of the results by relying on different sensors and that provides an environment for remote communication between doctors and patients. In this research we plan to use facial emotion expression, speech emotion recognition, galvanic skin response, EEG, and ECG signals; in this study we focused on facial emotion recognition, which is more reliable than the other methods.

An artificial-intelligence-based application was developed as a solution for the detection and recognition of emotions in intelligent health contexts. Its purpose is to assess the patient's emotional state through the analysis of their facial expressions. To achieve this goal, we first trained a convolutional neural network on labeled images from the ImageNet dataset to classify images recorded from patients into seven emotion classes: anger, happiness, sadness, disgust, fear, surprise, and neutral. We then changed our approach to predict the patient's emotion as two values in a continuous space, one for valence and one for arousal. The new model was trained on labeled images from the AffectNet dataset, which contains about 420K images manually annotated with valence and arousal values. Valence indicates the pleasantness of the stimulus and is represented along the horizontal axis, which runs from the negative pole (unpleasant) to the positive pole (pleasant). Arousal, on the other hand, refers to the intensity of the emotion and is represented along the vertical axis with the same range as valence.
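To make the dimensional (valence/arousal) approach concrete, the following is a minimal sketch in PyTorch. The framework, the network layout, the 96x96 input size, and the toy training step are all assumptions made for illustration and are not the model actually trained on AffectNet in the thesis; the same backbone could instead end in a seven-unit softmax head for the earlier categorical approach.

    # Minimal sketch (assumed PyTorch): a small CNN that maps a face crop
    # to two continuous outputs, valence and arousal, each in [-1, 1].
    import torch
    import torch.nn as nn

    class ValenceArousalNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                     # 96 -> 48
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                     # 48 -> 24
                nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),             # global average pooling
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(128, 64), nn.ReLU(),
                nn.Linear(64, 2),                    # [valence, arousal]
                nn.Tanh(),                           # squash outputs to [-1, 1]
            )

        def forward(self, x):
            return self.head(self.features(x))

    if __name__ == "__main__":
        model = ValenceArousalNet()
        criterion = nn.MSELoss()                     # regression loss on (valence, arousal)
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

        # Dummy batch standing in for preprocessed AffectNet face crops and labels.
        images = torch.randn(8, 3, 96, 96)
        targets = torch.empty(8, 2).uniform_(-1, 1)  # ground-truth valence/arousal

        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
        print(f"training-step loss: {loss.item():.4f}")

In this sketch the final Tanh keeps both outputs in [-1, 1], matching the nominal valence/arousal range of AffectNet annotations; a categorical variant would replace the two-unit head with a seven-unit linear layer and a cross-entropy loss.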

Supervisors: Gabriella Olmo, Vito De Feo
Academic year: 2020/21
Publication type: Electronic
Number of pages: 58
Subjects:
Degree programme: Master's degree programme in Computer Engineering (Ingegneria Informatica)
Degree class: New regulations > Master's degree > LM-32 - INGEGNERIA INFORMATICA (Computer Engineering)
Collaborating companies: Politecnico di Torino
URI: http://webthesis.biblio.polito.it/id/eprint/19122