Raleigh-Durham, North Carolina Area
Built a deep learning model that classifies the emotion of a spoken utterance. Developed a novel feature extraction method that combines handcrafted and learned features; the results beat state-of-the-art accuracy in speech emotion recognition (SER) for speaker-dependent experiments on the IEMOCAP dataset by 6%.
Gained proficiency in PyTorch for writing deep learning architectures; NumPy, SciPy, and Pandas for data preprocessing; and Matplotlib for data visualizations.
Read more here: https://rpc21.github.io/data-plus-results/
Paper featured in the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP): https://ieeexplore.ieee.org/document/9054629
Research conducted through the Data+ summer program at Duke University.
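The general idea of combining handcrafted and learned features can be illustrated with a minimal PyTorch sketch. This is a hypothetical toy model, not the project's actual architecture: it assumes mel-spectrogram input, a small GRU as the learned branch, and a vector of handcrafted statistics (e.g., pitch/energy summaries) concatenated before the classifier head; all names and dimensions here are illustrative.

```python
import torch
import torch.nn as nn

class HybridSERClassifier(nn.Module):
    """Toy sketch: fuse handcrafted features with learned ones (illustrative only)."""
    def __init__(self, n_handcrafted=40, n_mels=64, n_emotions=4):
        super().__init__()
        # Learned branch: a small GRU over mel-spectrogram frames.
        self.gru = nn.GRU(input_size=n_mels, hidden_size=128, batch_first=True)
        # Classifier head sees both feature types side by side.
        self.head = nn.Linear(128 + n_handcrafted, n_emotions)

    def forward(self, mel, handcrafted):
        # mel: (batch, frames, n_mels); handcrafted: (batch, n_handcrafted)
        _, h = self.gru(mel)                    # h: (1, batch, 128)
        learned = h.squeeze(0)                  # final hidden state: (batch, 128)
        combined = torch.cat([learned, handcrafted], dim=1)
        return self.head(combined)              # emotion logits: (batch, n_emotions)

model = HybridSERClassifier()
mel = torch.randn(2, 100, 64)        # dummy batch: 2 utterances, 100 frames each
handcrafted = torch.randn(2, 40)     # dummy handcrafted feature vectors
logits = model(mel, handcrafted)
print(logits.shape)  # torch.Size([2, 4])
```

Concatenating the two feature types lets the classifier weigh fixed acoustic descriptors alongside representations learned end to end, which is one common way such hybrid methods are structured.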