EVALUATION OF THE EFFECTIVENESS OF THE DEVELOPED METHOD FOR CLASSIFYING EMOTIONAL STATES THROUGH SPEECH SIGNALS

Authors: Anayatova R., Koshekov K.
IRSTI 73.37.17

Abstract. This study presents an innovative method for classifying emotional states through speech signals, leveraging advanced signal processing and machine learning techniques. The proposed method follows a multi-step approach comprising feature extraction, feature selection, and classification. First, key acoustic features such as pitch, intensity, formants, and Mel-frequency cepstral coefficients (MFCCs) are extracted from the speech signals. Feature selection techniques are then applied to identify the features most relevant for distinguishing different emotional states. Classification is performed using a combination of supervised learning algorithms, including support vector machines (SVM), random forests, and neural networks. To evaluate the effectiveness of the developed method, a comprehensive dataset of emotional speech recordings was utilized, covering diverse emotional states such as happiness, sadness, anger, fear, and neutrality. The performance of the classification models was assessed using standard metrics: accuracy, precision, and recall.
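The selection-plus-classification stage described above can be sketched as follows. This is a minimal illustrative pipeline, not the authors' implementation: the scikit-learn components, the synthetic 40-dimensional feature vectors (standing in for real pitch/intensity/formant/MFCC statistics), and the choice of k=10 selected features are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["happiness", "sadness", "anger", "fear", "neutrality"]

rng = np.random.default_rng(0)
# Synthetic stand-in for per-utterance acoustic features (pitch,
# intensity, formants, MFCC statistics); 40 dims is arbitrary.
X = rng.normal(size=(500, 40))
y = rng.integers(0, len(EMOTIONS), size=500)
# Make the first 5 dimensions class-informative so feature
# selection has a real signal to find.
X[:, :5] += y[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The three classifier families named in the abstract.
models = {
    "svm": SVC(kernel="rbf"),
    "random_forest": RandomForestClassifier(random_state=0),
    "neural_net": MLPClassifier(hidden_layer_sizes=(64,),
                                max_iter=500, random_state=0),
}

scores = {}
for name, clf in models.items():
    # Standardize, keep the 10 most discriminative features
    # (ANOVA F-test), then classify.
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=10), clf)
    pipe.fit(X_tr, y_tr)
    scores[name] = pipe.score(X_te, y_te)
    print(name, round(scores[name], 3))
```

The `SelectKBest` step mirrors the abstract's point that discarding uninformative features reduces the downstream model's computational load while preserving classification accuracy.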
Experimental results demonstrated that the proposed method achieved a high accuracy rate, outperforming existing state-of-the-art techniques. The neural network model, in particular, showed superior performance in capturing the nuances of emotional expressions in speech. Additionally, the feature selection process significantly enhanced the model’s efficiency by reducing computational complexity while maintaining high classification accuracy. In conclusion, the developed method provides a robust and effective solution for classifying emotional states from speech signals, with potential applications in fields such as human-computer interaction, mental health monitoring, and affective computing. Future work will focus on further refining the model by incorporating more diverse datasets and exploring real-time implementation possibilities.

Keywords: emotional state classification, speech signal, feature extraction, machine learning, neural networks, human-computer interaction, affective computing, AI.