Advancing EEG-Based Emotion Recognition: Multimodal Techniques, Channel Optimization, and Insights into Subjective Emotion Perception
Metadata
Author
Dharia, Shyamal Y.
Date
2024-11-28
Citation
Dharia, Shyamal Y. Advancing EEG-Based Emotion Recognition: Multimodal Techniques, Channel Optimization, and Insights into Subjective Emotion Perception; A Thesis submitted to the Faculty of Graduate Studies of the University of Winnipeg in partial fulfillment of the requirements of the degree of Master of Science, Department of Applied Computer Science, University of Winnipeg. Winnipeg, Manitoba, Canada: University of Winnipeg, 2024. DOI: 10.36939/ir.202412021534.
Abstract
This thesis explores the application of electroencephalography (EEG) to identifying and understanding the neural mechanisms of emotional responses. Using a non-invasive and cost-effective OpenBCI Cyton wireless EEG system, this research developed and assessed several technical advancements for reliable neural activity recording. Key among these were the implementation of both software and hardware triggers to ensure precise data acquisition, and a comparative analysis of dry versus semi-dry electrodes. The findings suggest that semi-dry electrodes, when used with a flexible cap, not only minimize noise but also improve participant comfort, thereby offering clear advantages over OpenBCI's dry electrodes mounted in a rigid cap. Additionally, our study of event-related potentials (ERPs), which included measures of subjective emotional responses, indicates that the effect of motion versus static stimuli diminishes at higher emotional intensities. This observation suggests that at elevated levels of emotional arousal, the emotional content of the stimuli becomes more salient to the perceiver than the associated motion information. Moreover, this thesis describes the development of a deep learning model for emotion recognition that achieved a prediction accuracy of 72.3% by combining EEG and eye movement data. However, the practical challenges associated with multimodal data collection, particularly among older adults and individuals with neurological disorders, are pronounced; the use of 62 EEG channels, for instance, can be cumbersome and uncomfortable. This challenge spurred further investigation into the transferability and generalizability of EEG channel selection for emotion recognition tasks across different datasets. Employing a dataset-independent strategy that uses Power Spectral Density (PSD) to pinpoint critical EEG channels, we validated our methodology on an independent dataset using a Convolutional Neural Network (CNN). Through comprehensive experiments that varied the number of channels and features, our models achieved classification accuracies of 77.02%, 75.42%, 71.31%, and 64.31% with configurations of 62, 30, 20, and 10 EEG channels, respectively, for four distinct emotion categories. This approach to channel selection not only reduced the number of EEG channels required for accurate emotion prediction but also paved the way for the development of more efficient EEG systems. Such systems are envisioned to facilitate daily emotional monitoring in individuals with neurodegenerative diseases. Lastly, the research introduced a deep learning model for predicting subjective emotional intensity, which achieved an F1-score of 65.2% in classifying high versus low emotional intensities. This result underscores the significant impact of the subjectivity of emotions on the efficacy of emotion recognition technologies.
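For illustration only, the sketch below shows one way a PSD-based, dataset-independent channel ranking of the kind summarized above could be implemented in Python. It is not the thesis code: the function name, sampling rate, frequency band, and the use of scipy.signal.welch are assumptions chosen for demonstration, and the channel count (62) simply mirrors the configuration mentioned in the abstract.

```python
# Minimal sketch: rank EEG channels by mean Power Spectral Density (PSD)
# within a broad frequency band and keep the top-k channels, as a
# dataset-independent channel-selection heuristic. Hypothetical parameters.
import numpy as np
from scipy.signal import welch

def rank_channels_by_psd(eeg, fs=200.0, band=(4.0, 45.0), top_k=10):
    """eeg: array of shape (n_channels, n_samples).
    Returns indices of the top_k channels ranked by mean PSD within `band` (Hz)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2), axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    band_power = psd[:, mask].mean(axis=1)       # one score per channel
    return np.argsort(band_power)[::-1][:top_k]  # highest mean PSD first

if __name__ == "__main__":
    # Random data standing in for a 62-channel recording (10 s at 200 Hz).
    rng = np.random.default_rng(0)
    fake_eeg = rng.standard_normal((62, 2000))
    print(rank_channels_by_psd(fake_eeg, fs=200.0, top_k=10))
```

In practice, the selected channel indices would then define the reduced input montage for a downstream classifier such as the CNN evaluated in the thesis.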