Music Affective Computing
Between 2021 and 2023, Guru Technology led advancements in Music Affective Computing, conducting research on music emotion and structural analysis across Korea and Nepal. Our team built a comprehensive music emotion dataset covering six emotion categories and used it to train GlocalEmoNet, a novel architecture that achieves state-of-the-art performance. GlocalEmoNet captures both local and global correlations in music, enabling accurate emotion classification and segmentation. To validate its effectiveness, we used visual representations of the classification and segmentation outcomes, showcasing the architecture's capability in advanced emotion analysis.
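As a rough illustration of the two tasks named above, a clip-level emotion label can be derived from frame-wise predictions by majority vote (classification), while frame accuracy measures how well the predicted label sequence segments the clip. This is a minimal sketch, not GlocalEmoNet itself; the six emotion names are placeholders, since the dataset's actual category labels are not listed here.

```python
from collections import Counter

# Placeholder emotion categories -- the dataset's real six labels may differ.
EMOTIONS = ["happy", "sad", "tender", "angry", "fearful", "calm"]

def clip_emotion(frame_labels):
    """Clip-level classification: majority vote over per-frame labels."""
    return Counter(frame_labels).most_common(1)[0][0]

def frame_accuracy(pred, true):
    """Segmentation quality: fraction of frames labelled correctly."""
    assert len(pred) == len(true)
    return sum(p == t for p, t in zip(pred, true)) / len(true)

# Ten frames of a hypothetical clip: model predicts one extra "happy" frame.
pred = ["happy"] * 6 + ["sad"] * 4
true = ["happy"] * 5 + ["sad"] * 5
print(clip_emotion(pred))          # -> happy
print(frame_accuracy(pred, true))  # -> 0.9
```

In practice the frame-wise labels would come from a trained network; the vote and the accuracy computation stay the same.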
For structural analysis, our research focused on Korean traditional music (Pansori), comparing two classification models, including the novel GlocalMuseNet, and two segmentation models, including the DeepLabV3+ network. GlocalMuseNet outperformed HRNet in rhythm classification, while DeepLabV3+ achieved high accuracy in rhythm segmentation. These results underline our commitment to advancing affective computing and music analysis with deep learning.
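Semantic-segmentation models such as DeepLabV3+ are commonly evaluated with mean intersection-over-union (mIoU) over the predicted label sequence. The sketch below shows that metric for frame-wise rhythm labels; the two rhythm classes and the label sequences are illustrative, not data from the Pansori study.

```python
def mean_iou(pred, true, num_classes):
    """Mean intersection-over-union over frame-wise class labels,
    the usual metric for semantic-segmentation models like DeepLabV3+."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, true) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, true) if p == c or t == c)
        if union:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Ten frames, two hypothetical rhythm classes; one boundary frame is off.
pred = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0]
true = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0]
print(round(mean_iou(pred, true, 2), 3))  # -> 0.817
```

IoU penalizes boundary errors per class rather than per frame, which is why it is preferred over plain accuracy for segmentation.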
Publications:
- Applied Sciences: https://doi.org/10.3390/app12199571
- Multimedia Tools and Applications: https://doi.org/10.1007/s11042-024-18246-4