---
license: apache-2.0
datasets:
- Yukang/LongAlpaca-12k
base_model:
- openchat/openchat-3.5-0106
library_name: transformers
---
# OpenChat-3.5-0106_32K-PoSE

## Description

This model is [OpenChat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) with its context length extended from 8192 tokens to 32768 tokens using [PoSE](https://huggingface.co/papers/2309.10400).

The model was fine-tuned with [Rank-Stabilized LoRA](https://huggingface.co/blog/damjan-k/rslora) on the [LongAlpaca-12k](https://huggingface.co/datasets/Yukang/LongAlpaca-12k) dataset.

I hope to extend the context further in future versions and then apply the same methods to my [upscaled versions of OpenChat-3.5](https://huggingface.co/collections/Pretergeek/openchat-35-0106-with-additional-layers-66a8d3262c7c3ebdd7783a29).

## Citations

```
@misc{zhu2024poseefficientcontextwindow,
      title={PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training},
      author={Dawei Zhu and Nan Yang and Liang Wang and Yifan Song and Wenhao Wu and Furu Wei and Sujian Li},
      year={2024},
      eprint={2309.10400},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2309.10400},
}
```
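
## Usage

A minimal usage sketch with the standard `transformers` causal-LM API and the chat template inherited from the base OpenChat-3.5-0106 model. The repository id, dtype, and generation settings below are assumptions for illustration, not prescriptions from this card.

```python
# Minimal sketch: load the model and generate from a long-context prompt.
# Adjust device_map / dtype to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pretergeek/OpenChat-3.5-0106_32K-PoSE"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

long_document = "..."  # replace with your long input text (up to the 32K-token window)

# OpenChat-3.5 ships a chat template, so apply_chat_template builds the prompt.
messages = [
    {"role": "user", "content": "Summarize the following document:\n\n" + long_document},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```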