---
language: es
license: cc-by-4.0
tags:
- spanish
- roberta
pipeline_tag: fill-mask
widget:
- text: Fui a la librería a comprar un <mask>.
---

This is a **RoBERTa-base** model trained from scratch in Spanish. The training dataset is [mc4](https://huggingface.co/datasets/bertin-project/mc4-es-sampled), randomly subsampled to a total of about 50 million documents. This model continued training from the [sequence length 128](https://huggingface.co/bertin-project/bertin-base-random) checkpoint, running 20,000 further steps at sequence length 512. Please see our main [card](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) for more information.

This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/), with TPU usage sponsored by Google.

## Team members

- Eduardo González ([edugp](https://huggingface.co/edugp))
- Javier de la Rosa ([versae](https://huggingface.co/versae))
- Manu Romero ([mrm8488](https://huggingface.co/))
- María Grandury ([mariagrandury](https://huggingface.co/))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Paulo Villegas ([paulo](https://huggingface.co/paulo))
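
## Usage

The widget example above can be reproduced locally with the `transformers` fill-mask pipeline. A minimal sketch follows; it uses the linked `bertin-project/bertin-base-random` checkpoint as the model id, since the repo id of this particular 512-sequence-length checkpoint may differ (substitute the id shown on this card's page):

```python
from transformers import pipeline

# Assumed model id: this card's actual repo id may differ.
fill_mask = pipeline("fill-mask", model="bertin-project/bertin-base-random")

# RoBERTa uses <mask> as its mask token.
predictions = fill_mask("Fui a la librería a comprar un <mask>.")
for pred in predictions:
    print(pred["token_str"], round(pred["score"], 3))
```

Each prediction is a dict containing the filled-in token (`token_str`), its probability (`score`), and the completed sentence (`sequence`).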