how to contribute?
Hello,
Thank you so much for this amazing effort. Is it possible to contribute to this project? For example, could one submit task-specific datasets and prompts, either from previous papers that are not in the current version of FineVision, or as completely new tasks?
Hi! Thank you for reaching out! Contributing is definitely possible, and how depends a bit on your compute/storage capabilities. The first way to contribute is to find new datasets and bring them into the same format we use here, which makes them easier to add. A FineVision2 with more data is definitely possible in the future, and the first stage (data gathering) is the most human-intensive but requires few resources.
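As a minimal sketch of what "bringing a dataset into the same format" could look like, assuming a FineVision-style row layout with an `images` list and a `texts` list of user/assistant turns (the exact field names here are an assumption, not the authoritative schema; check the dataset card before converting anything):

```python
# Hypothetical sketch: convert a simple VQA-style dataset into a
# FineVision-like row. The `images`/`texts`/`source` field names are
# assumptions for illustration, not the official schema.

def to_finevision_row(image_path, qa_pairs, source="my-new-dataset"):
    """qa_pairs: list of (question, answer) tuples for one image."""
    return {
        "images": [image_path],
        "texts": [
            {"user": q, "assistant": a, "source": source}
            for q, a in qa_pairs
        ],
    }

row = to_finevision_row("img_001.jpg", [("What color is the car?", "Red")])
print(row["texts"][0]["user"])  # What color is the car?
```

A converter like this could then be applied per-sample with `datasets.Dataset.map` before uploading.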
If you have storage/compute, a great way to contribute would be to run mixture ablations. Is it all the same if we train on the whole dataset or repeat some subsets? What if we remove some particular subsets or downsample them? I manually downsampled some of the sources to avoid skewing the dataset too much (our SmolDocling datasets have 10M samples; if I had passed them through unchanged, half of FineVision would have been Document AI, which isn't great for general VLM training).
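A toy sketch of the kind of per-source capping described above (the subset names and cap are made up; FineVision's actual curation was done upstream and by hand):

```python
import random

def cap_subset(samples, max_samples, seed=0):
    """Downsample one source so it cannot dominate the mixture."""
    if len(samples) <= max_samples:
        return list(samples)
    rng = random.Random(seed)  # fixed seed keeps the ablation reproducible
    return rng.sample(samples, max_samples)

# Toy mixture: one huge Document AI source next to a small one.
mixture = {"smoldocling": list(range(10_000)), "cocoqa": list(range(500))}
capped = {name: cap_subset(s, max_samples=1_000) for name, s in mixture.items()}
print({name: len(s) for name, s in capped.items()})
# {'smoldocling': 1000, 'cocoqa': 500}
```

Varying `max_samples` per source (or removing sources entirely) and retraining is exactly the kind of mixture ablation that would be informative here.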
Thank you, @andito
I downloaded the dataset, sampled some random questions, reviewed some subsets, and tried to get some statistics and analysis with GPT-5. Here are my thoughts:
- Some datasets could benefit from being rewritten by an open-weight model (e.g., Qwen). For example, COCO-Text currently contains very low-quality questions and answers.
- Some datasets are heavily templated (e.g., FigureQA), and it would be useful to have a tag or label to indicate this. If someone wants to run an ablation, these tags can be helpful.
- I think it’s a good idea to categorize datasets by the set of skills they test; for example: visual grounding, world knowledge, commonsense reasoning, OCR, etc.
- Some skills seem to be missing entirely, for instance, image quality assessment questions or detecting anomalies in text-to-image outputs.
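On the first point above, a sketch of how a rewrite request for a low-quality QA pair could be framed for an open-weight chat model (the prompt wording and model choice are illustrative assumptions, not a tested recipe):

```python
# Hypothetical sketch: build a chat prompt asking an open-weight model
# (e.g. a Qwen instruct model via `transformers`) to rewrite a
# low-quality question/answer pair without changing its meaning.

REWRITE_INSTRUCTION = (
    "Rewrite this question/answer pair about an image so the question is "
    "natural English and the answer is fluent, without changing the meaning.\n"
    "Question: {question}\nAnswer: {answer}"
)

def build_rewrite_messages(question, answer):
    return [{
        "role": "user",
        "content": REWRITE_INSTRUCTION.format(question=question, answer=answer),
    }]

messages = build_rewrite_messages("what write on sign", "stop")
# These messages could then be passed to a chat model, e.g. via
# transformers.pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct").
```

Keeping the original pair alongside the rewrite would make it easy to ablate rewritten vs. original subsets later.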
I also think there are many more datasets that could be repurposed and incorporated into this collection. For example, robotics datasets for indoor navigation could be included with template-based questions (which are relatively easy to generate and verify).
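A sketch of the kind of template-based generation I mean, turning structured annotations into QA pairs that are trivial to verify (the annotation format and templates here are invented for illustration):

```python
# Hypothetical sketch: generate templated VQA pairs from structured
# scene annotations (e.g. object lists from a robotics dataset).
TEMPLATES = [
    ("How many {label}s are visible in the scene?",
     lambda ann, label: str(sum(1 for o in ann["objects"] if o["label"] == label))),
    ("Is there a {label} in the scene?",
     lambda ann, label: "Yes" if any(o["label"] == label for o in ann["objects"]) else "No"),
]

def generate_qa(annotation):
    """Yield (question, answer) pairs for every object label present."""
    labels = {o["label"] for o in annotation["objects"]}
    for label in sorted(labels):
        for template, answer_fn in TEMPLATES:
            yield template.format(label=label), answer_fn(annotation, label)

ann = {"objects": [{"label": "chair"}, {"label": "chair"}, {"label": "door"}]}
for q, a in generate_qa(ann):
    print(q, "->", a)
```

Because the answers are computed directly from the annotations, correctness can be checked automatically, which is exactly what makes this kind of data cheap to produce at scale.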
I plan to invest more time in this in the near future.
I'll put this here since it's relevant, and I'll get back to it: https://arxiv.org/abs/2510.12225