Reproducibility inquiry
Hello, I have a few questions about the implementation for reproducibility:
RoPE implementation: Is it correct to say that the partial application of RoPE in dense attention comes from MLA, whereas in the indexer, no matter which dense-attention variant is used, the idea is to apply partial RoPE so that token selection is less biased by position?
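To make sure we mean the same thing by "partial RoPE", here is a minimal sketch of how I picture it (tensor names, shapes and the `rotary_dim` split are my own assumptions, not taken from the repo): only a slice of each head dimension is rotated, and the remaining channels carry no positional signal.

```python
import torch

def apply_partial_rope(x, cos, sin, rotary_dim):
    # x: (batch, seq, heads, head_dim); cos/sin broadcastable to the rotated slice.
    # Only the first rotary_dim channels get the rotary embedding; the rest
    # pass through untouched, i.e. they stay position-agnostic.
    x_rot, x_pass = x[..., :rotary_dim], x[..., rotary_dim:]
    x1, x2 = x_rot.chunk(2, dim=-1)
    rotated = torch.cat((-x2, x1), dim=-1)          # rotate_half
    x_rot = x_rot * cos + rotated * sin
    return torch.cat((x_rot, x_pass), dim=-1)
```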
Attention implementation: I understood from the paper that the sparse token selection makes it possible to bring the attention cost down to L·k instead of L², but in the GitHub repo and on Hugging Face the implementation only applies the selection as an extra mask on top of the causal attention mask and still computes the full L² attention. Is there a reason for this?
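For context, this is roughly the gather-based version I expected (all names, shapes and the `index_scores` interface are my own assumptions, not the repo's API): the top-k keys per query are gathered from the indexer scores and attention runs only over those, instead of masking the full L × L score matrix.

```python
import torch

def topk_sparse_attention(q, k, v, index_scores, topk, scale):
    # q, k, v: (B, H, L, D); index_scores: (B, L, L) from the indexer,
    # assumed already causally masked (future positions set to -inf).
    B, H, L, D = q.shape
    sel = index_scores.topk(topk, dim=-1).indices                 # (B, L, k) key ids per query
    idx = sel[:, None, :, :, None].expand(B, H, L, topk, D)       # (B, H, L, k, D)
    # Gather only the selected keys/values per query position.
    k_sel = torch.gather(k[:, :, None].expand(B, H, L, L, D), 3, idx)
    v_sel = torch.gather(v[:, :, None].expand(B, H, L, L, D), 3, idx)
    scores = torch.einsum("bhld,bhlkd->bhlk", q, k_sel) * scale   # (B, H, L, k)
    probs = scores.softmax(dim=-1)
    return torch.einsum("bhlk,bhlkd->bhld", probs, v_sel)         # (B, H, L, D)
```

I understand the masked version is simpler and numerically equivalent, but it keeps the quadratic cost, so I am wondering whether a gather-based (or custom-kernel) path exists somewhere or is planned.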
Warmup and sparse stages: Is it correct to say that the data mixture should contain no padding, i.e. every sample should have exactly the max sequence length?
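To make the question concrete, this is the kind of packing I am assuming (the function and argument names are mine, purely illustrative): documents are concatenated and sliced into fixed-length samples, so no padding tokens appear in the mixture.

```python
def pack_to_max_len(token_streams, max_seq_len, eos_id):
    # Concatenate tokenized documents (separated by EOS) and cut fixed-length
    # samples, so every training sample is exactly max_seq_len tokens long.
    buffer, samples = [], []
    for tokens in token_streams:
        buffer.extend(tokens + [eos_id])
        while len(buffer) >= max_seq_len:
            samples.append(buffer[:max_seq_len])
            buffer = buffer[max_seq_len:]
    return samples
```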