Sequential Learning for Dance Generation


Generating dance using deep learning techniques.

The proposed model is shown in the following image:

Proposed Model

The joints of the skeleton employed in the experiment are shown in the following image:


Use of GPU

If you use a GPU in your experiment, set the --gpu option appropriately, e.g.,

$ ./ --gpu 0

The default setup uses GPU 0 (--gpu 0). For CPU execution, set --gpu to -1.
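The intended behavior of the --gpu flag can be sketched in Python with argparse (an illustrative sketch only; the repository's actual argument handling may differ):

```python
import argparse

def parse_gpu(argv=None):
    # Sketch of the --gpu option described above: a non-negative ID
    # selects that GPU, while -1 requests CPU execution.
    parser = argparse.ArgumentParser(description="dance generation (sketch)")
    parser.add_argument("--gpu", type=int, default=0,
                        help="GPU ID to use; set to -1 for CPU")
    return parser.parse_args(argv)

args = parse_gpu(["--gpu", "-1"])
device = "cpu" if args.gpu < 0 else f"gpu{args.gpu}"
print(device)  # -> cpu
```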


The main routine is executed by:

$ ./ --net $net --exp $exp --sequence $sequence --epoch $epochs --stage $stage

Different types of datasets can be trained by changing the $exp option.
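The main routine's command line can be sketched as follows. The option names are taken from the usage line above; the types, defaults, and example values here are assumptions for illustration:

```python
import argparse

def build_parser():
    # Sketch of the main routine's options. Names come from the README;
    # types, defaults, and help strings are assumptions.
    parser = argparse.ArgumentParser(description="main routine (sketch)")
    parser.add_argument("--net", type=str, help="network model to train")
    parser.add_argument("--exp", type=str, help="dataset / experiment type")
    parser.add_argument("--sequence", type=int, help="sequence length")
    parser.add_argument("--epoch", type=int, help="number of training epochs")
    parser.add_argument("--stage", type=int, help="training stage")
    parser.add_argument("--gpu", type=int, default=0, help="GPU ID; -1 for CPU")
    return parser

args = build_parser().parse_args(
    ["--net", "rnn", "--exp", "demo", "--sequence", "150",
     "--epoch", "10", "--stage", "1"]
)
print(args.exp, args.epoch)  # -> demo 10
```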

To run inside a Docker container, use the file ( instead of (.

Unreal Engine 4 Visualization

For demonstration from evaluation files, or for testing training files, use (local/. For real-time emulation, execute (.


For training and evaluation, the following Python libraries are required:

Install the following music libraries to convert the audio files:

$ sudo apt-get install libsox-fmt-mp3
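With MP3 support installed, SoX can convert the audio files to WAV. A small helper to build such a conversion command might look like this (the 16 kHz target rate and the file names are assumptions for illustration, not values from the repository):

```python
def mp3_to_wav_cmd(src, dst, rate=16000):
    # Build a SoX command converting an MP3 file to WAV, resampled to
    # `rate` Hz (16 kHz here is an assumed target rate).
    # The resulting list can be passed to subprocess.run(cmd, check=True).
    return ["sox", src, "-r", str(rate), dst]

cmd = mp3_to_wav_cmd("clip.mp3", "clip.wav")
print(" ".join(cmd))  # -> sox clip.mp3 -r 16000 clip.wav
```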

Additionally, you may need Marsyas to extract the beat reference information.

For real-time emulation:




References

[1] Nelson Yalta, Shinji Watanabe, Kazuhiro Nakadai, Tetsuya Ogata, “Weakly Supervised Deep Recurrent Neural Networks for Basic Dance Step Generation”, arXiv

[2] Nelson Yalta, Kazuhiro Nakadai, Tetsuya Ogata, “Sequential Deep Learning for Dancing Motion Generation”, SIG-Challenge 2016