Tokenized?

#1
by prshnthrv - opened

What tokenizer is used to tokenize this dataset? GPT2?

Also, how was the dataset distribution determined? Is it randomly sampled, or is it the first 2B tokens of the Pile?

Note for the community: I tested decoding the tokenized samples with the gpt2, llama, pythia, and qwen(3) tokenizers; the only one that gives coherent and correct-looking decodings is pythia. Hope that helps.
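For anyone who wants to reproduce this check, here's a minimal sketch using the `transformers` library: decode the same token-id sample with each candidate tokenizer and eyeball the output. The candidate list and the helper function are illustrative, not part of the dataset's docs; the model ids are the usual Hub names.

```python
# Sketch: decode one pre-tokenized sample with several candidate tokenizers
# and compare the outputs by eye. Whichever decoding reads as coherent text
# is very likely the tokenizer the dataset was encoded with.
from transformers import AutoTokenizer

# Hypothetical candidate list (usual Hub ids); adjust as needed.
CANDIDATES = ["gpt2", "EleutherAI/pythia-70m", "Qwen/Qwen2.5-0.5B"]

def decode_sample(token_ids, model_name):
    """Decode a list of token ids with the named tokenizer."""
    tok = AutoTokenizer.from_pretrained(model_name)
    return tok.decode(token_ids)

if __name__ == "__main__":
    sample_ids = [15496, 995]  # replace with ids from an actual dataset row
    for name in CANDIDATES:
        print(f"{name}: {decode_sample(sample_ids, name)!r}")
```

Only the tokenizer with the matching vocabulary will turn the ids back into readable text; the others typically produce garbled or mismatched strings.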
