Suggestion: Smaller parquet files?
Loading the 6.5 GB metadata parquet file is painful on Colab. Consider splitting the metadata parquet file into ~100 parts, or along some other sensible split.
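(For reference, a split along these lines with pyarrow would do it. This is just a sketch; "metadata.parquet" and the output naming are placeholders, not the dataset's actual file names.)

```python
# Sketch: shard one large metadata parquet into many smaller files.
# File names and batch size are placeholders to tune for the target shard size.
import pyarrow as pa
import pyarrow.parquet as pq

pf = pq.ParquetFile("metadata.parquet")
for i, batch in enumerate(pf.iter_batches(batch_size=500_000)):
    pq.write_table(pa.Table.from_batches([batch]), f"metadata-{i:04d}.parquet")
```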
Cheers
https://github.com/grio43/LORA-tools/tree/main/Pullers
Here is an example of how I deal with them: you don't have to load the entire file, you can stream through it.
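(To make the idea concrete for anyone reading along: the streaming pattern looks roughly like this. This is a generic pyarrow sketch, not the actual code from the linked repo, and the column names are made up.)

```python
# Generic streaming sketch; column names below are placeholders.
import pyarrow.dataset as ds

dataset = ds.dataset("metadata.parquet", format="parquet")
# Reads only the requested columns, one record batch at a time.
for batch in dataset.to_batches(columns=["id", "tags"], batch_size=10_000):
    for row in batch.to_pylist():
        pass  # process a single record here
```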
Thanks, but that's not the issue.
I'm suggesting that the owner of the dataset split the parquet file into smaller parts to make it more easily digestible.
Have you tried chunking with pyarrow instead of loading the entire parquet into memory with pandas?
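(Something like this, for anyone following the thread; the file name is a placeholder:)

```python
import pyarrow.parquet as pq

# Instead of df = pd.read_parquet("metadata.parquet")  # pulls ~6.5 GB into RAM
pf = pq.ParquetFile("metadata.parquet")   # placeholder file name
for batch in pf.iter_batches(batch_size=100_000):
    chunk = batch.to_pandas()             # one small DataFrame at a time
    # filter / aggregate here and keep only what you need
```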
Hahaha, it's not tech support! I mean, yes, I can solve it.
I wrote 'use smaller parquet files' to make life easier for average users. Average users! Not me! Altruism!
But thanks, lol, appreciate the pyarrow suggestion.
But like... normal people will read this convo and have no clue what we are talking about.
I'm suggesting adapting the dataset to smaller sizes so normal people can use it, without requiring a lot of prior coding knowledge.
The average user would want a single CSV instead since "parquets are hard", and that would be far worse. xD
Backstory: I do a lot of data processing so those users' training jobs don't take a literal week to start every time due to "easier" formats (but mostly bad trainer design).