
Joint Chord and Key Estimation Based on a Hierarchical Variational Autoencoder with Multi-task Learning

Yiming Wu, Graduate School of Informatics, Kyoto University, Japan; Kazuyoshi Yoshii, Graduate School of Informatics, Kyoto University, and PRESTO, Japan Science and Technology Agency, Japan, yoshii@i.kyoto-u.ac.jp
 
Suggested Citation
Yiming Wu and Kazuyoshi Yoshii (2022), "Joint Chord and Key Estimation Based on a Hierarchical Variational Autoencoder with Multi-task Learning", APSIPA Transactions on Signal and Information Processing: Vol. 11: No. 1, e19. http://dx.doi.org/10.1561/116.00000052

Publication Date: 21 Jun 2022
© 2022 Y. Wu and K. Yoshii
 
Open Access

This article is published under the terms of the CC BY-NC license.

Abstract

This paper describes a deep generative approach to joint chord and key estimation for music signals. The limited availability of music signals with complete annotations has been the major bottleneck in supervised multi-task learning of a classification model. To overcome this limitation, we integrate the supervised multi-task learning approach with the unsupervised autoencoding approach in a mutually complementary manner. Considering the typical process of music composition, we formulate a hierarchical latent variable model that sequentially generates keys, chords, and chroma vectors. The keys and chords are assumed to follow a language model that represents their relationships and dynamics. In the framework of amortized variational inference (AVI), we introduce a classification model that jointly infers discrete chord and key labels and a recognition model that infers continuous latent features. These models are combined to form a variational autoencoder (VAE) and are trained jointly in a (semi-)supervised manner, where the generative and language models act as regularizers for the classification model. We comprehensively investigate three different architectures for the chord and key classification model and three different architectures for the language model. Experimental results demonstrate that the VAE-based multi-task learning improves both chord estimation and key estimation.
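To make the training objective concrete, the following is a minimal PyTorch-style sketch of a semi-supervised VAE of this general kind, not the authors' implementation: the module sizes, the label vocabularies, the Gumbel-softmax relaxation of the discrete chord/key labels, and the omission of the chord/key language model are all simplifying assumptions.

```python
# Hedged sketch of a semi-supervised chord/key VAE objective (assumed design,
# not the paper's actual architecture). Chroma vectors are assumed to lie in [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CHROMA, N_CHORD, N_KEY, N_LATENT = 12, 25, 24, 16  # assumed vocabulary/latent sizes


class ChordKeyVAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Classification model: infers frame-level chord and key posteriors.
        self.classifier = nn.GRU(N_CHROMA, 64, batch_first=True, bidirectional=True)
        self.chord_head = nn.Linear(128, N_CHORD)
        self.key_head = nn.Linear(128, N_KEY)
        # Recognition model: infers continuous latent features per frame.
        self.enc_mu = nn.Linear(N_CHROMA + N_CHORD + N_KEY, N_LATENT)
        self.enc_logvar = nn.Linear(N_CHROMA + N_CHORD + N_KEY, N_LATENT)
        # Generative model: reconstructs chroma vectors from labels and latents.
        self.decoder = nn.Sequential(
            nn.Linear(N_CHORD + N_KEY + N_LATENT, 64), nn.ReLU(),
            nn.Linear(64, N_CHROMA), nn.Sigmoid(),
        )

    def forward(self, chroma, tau=0.5):
        h, _ = self.classifier(chroma)            # (batch, time, 128)
        chord_logits = self.chord_head(h)
        key_logits = self.key_head(h)
        # Differentiable samples of the discrete labels (Gumbel-softmax relaxation).
        chord = F.gumbel_softmax(chord_logits, tau=tau)
        key = F.gumbel_softmax(key_logits, tau=tau)
        # Continuous latent features via the reparameterization trick.
        enc_in = torch.cat([chroma, chord, key], dim=-1)
        mu, logvar = self.enc_mu(enc_in), self.enc_logvar(enc_in)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(torch.cat([chord, key, z], dim=-1))
        return recon, chord_logits, key_logits, mu, logvar


def semi_supervised_loss(model, chroma, chord_ref=None, key_ref=None, beta=1.0):
    """ELBO (reconstruction + KL) plus supervised cross-entropy when labels exist."""
    recon, chord_logits, key_logits, mu, logvar = model(chroma)
    recon_loss = F.binary_cross_entropy(recon, chroma, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon_loss + beta * kl
    if chord_ref is not None:  # annotated data: add the classification terms
        loss = loss + F.cross_entropy(chord_logits.transpose(1, 2), chord_ref)
        loss = loss + F.cross_entropy(key_logits.transpose(1, 2), key_ref)
    return loss
```

In this sketch, unannotated signals contribute only the reconstruction and KL terms, while annotated signals additionally supervise the chord and key heads, which is how the generative model acts as a regularizer for the classifier in the semi-supervised setting described above.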

DOI: 10.1561/116.00000052