---
license: mit
tags:
- audio
- music
- generation
- tensorflow
---
# Musika Model: yellow-magic-orchestra(complete)

## Model provided by: nobitachainsaw

Pretrained yellow-magic-orchestra(complete) model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation.
Introduced in [this paper](https://arxiv.org/abs/2208.08706).

## How to use

You can generate music from this pretrained yellow-magic-orchestra(complete) model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r).
### Model description

This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on the fly; the gradient penalty weighting term is stored in *switch.npy*. The generator is conditioned on a latent coordinate system so it can produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder, which converts them into waveform audio.

The generator has a context window of about 12 seconds of audio.
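The coordinate-conditioned design above is what makes "infinite" generation possible: the generator is called once per latent window, consecutive windows overlap, and a decoder upsamples the stitched latents to audio. The sketch below illustrates that stitching idea only; the generator, decoder, and all dimensions (`LATENT_DIM`, `HOP`, `SAMPLES_PER_FRAME`) are hypothetical stand-ins, not the actual Musika networks or checkpoint layout.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 64          # hypothetical latent channel count
WINDOW_LEN = 32          # latent frames produced per generator call
HOP = 16                 # frames advanced per call (windows overlap)
SAMPLES_PER_FRAME = 256  # hypothetical decoder upsampling factor

def generator(style, coord):
    """Stand-in generator: maps a style tensor plus a scalar time
    coordinate to one window of latent frames (the real model is a
    ResNet-style GAN generator)."""
    return np.tanh(style + coord)

def decoder(latents):
    """Stand-in decoder: upsamples latent frames to waveform samples."""
    return np.repeat(latents.mean(axis=1), SAMPLES_PER_FRAME)

def generate(n_windows):
    # One style vector is shared across all calls while the time
    # coordinate advances, so consecutive windows stay coherent and the
    # piece can grow without bound -- the core of infinite generation.
    style = rng.standard_normal((WINDOW_LEN, LATENT_DIM))
    frames = []
    for w in range(n_windows):
        coord = w * HOP / WINDOW_LEN       # normalized time coordinate
        window = generator(style, coord)
        frames.append(window[:HOP])        # keep the non-overlapping part
    latents = np.concatenate(frames, axis=0)
    return decoder(latents)

audio = generate(8)
print(audio.shape)  # 8 windows * HOP frames * SAMPLES_PER_FRAME samples
```

Because each window is cheap to produce and windows are decoded independently of total length, generation cost grows linearly with duration, which is why the system can render long pieces quickly.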