---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-7B
tags:
- chat
library_name: transformers
---

# AHN: Artificial Hippocampus Networks for Efficient Long-Context Modeling

### Introduction

> Artificial Hippocampus Networks (AHNs) transform lossless memory into fixed-size compressed representations for long-context modeling. Lossless memory (e.g., attention's key-value (KV) cache) stores exact input information but grows with sequence length, making it inefficient for long sequences. In contrast, compressed memory (e.g., an RNN's hidden state) maintains a constant size and a fixed computational cost per input token, but at the cost of information loss. To harness the benefits of both memory types, AHNs continually convert lossless memory outside the sliding attention window into compressed form. AHNs can be instantiated with any RNN-like architecture. The model then integrates both memory types to make predictions across long contexts.

This repository hosts the model weights for AHN. For installation, usage instructions, and further documentation, please visit our [GitHub repository](https://github.com/bytedance-seed/AHN).
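For intuition, here is a minimal schematic sketch of how the two memory types interact; the state shape and update rule are illustrative stand-ins, not the actual AHN implementation:

```python
# Schematic only: a sliding-window "lossless" cache plus a fixed-size
# "compressed" state. The real AHN modules are RNN-like architectures
# such as Mamba2, DeltaNet, or GatedDeltaNet.
import torch

window = 3                    # sliding attention window length
d = 8                         # toy feature dimension

kv_cache = []                 # lossless memory: bounded by the window
state = torch.zeros(d)        # compressed memory: constant size

def ahn_update(state, evicted):
    # Hypothetical recurrent update folding an evicted token into the state.
    return torch.tanh(state + evicted)

for step in range(10):
    token = torch.randn(d)
    kv_cache.append(token)
    if len(kv_cache) > window:                      # token slides out of the window...
        state = ahn_update(state, kv_cache.pop(0))  # ...and gets compressed
    # The next-token prediction uses both `kv_cache` (exact, recent context)
    # and `state` (lossy summary of everything older).
```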
### Method
**(a)** Illustration of a model augmented with Artificial Hippocampus Networks (AHNs). In this example, the sliding window length is 3. When the input sequence length is less than or equal to the window length, the model operates identically to a standard Transformer. For longer sequences, AHNs continually compress the tokens outside the window into a compact memory representation. The model then uses both the lossless information within the window and the compressed memory to generate the next token.

**(b)** Self-distillation training framework of AHNs based on an open-weight LLM. During training, the base LLM's weights are frozen, and only the AHNs' parameters are trained.
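The training recipe in (b) can be sketched as follows, with toy `nn.Linear` stand-ins for the frozen base LLM and the trainable AHN module, and assuming a standard KL distillation loss (hypothetical details; see the GitHub repository for the actual training code):

```python
# Toy self-distillation step: freeze the base model, train only AHN parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, d = 100, 16
base_llm = nn.Linear(d, vocab)   # stand-in for the frozen open-weight LLM
ahn = nn.Linear(d, d)            # stand-in for the trainable AHN module

for p in base_llm.parameters():
    p.requires_grad_(False)      # base LLM weights are frozen

opt = torch.optim.AdamW(ahn.parameters(), lr=1e-4)  # AHN parameters only

x = torch.randn(4, d)                 # toy batch of hidden states
with torch.no_grad():
    teacher_logits = base_llm(x)      # teacher: frozen base model, full context
student_logits = base_llm(ahn(x))     # student: AHN-augmented pathway

# Match the student's next-token distribution to the teacher's.
loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                F.softmax(teacher_logits, dim=-1), reduction="batchmean")
loss.backward()
opt.step()
```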
### Model Zoo

| base model | AHN module | #params | checkpoint (AHN only) |
|:---:|:---:|:---:|:---:|
| Qwen2.5-3B-Instruct | Mamba2 | 11.9M | [🤗model](https://huggingface.co/ByteDance-Seed/AHN-Mamba2-for-Qwen-2.5-Instruct-3B) |
| Qwen2.5-3B-Instruct | DeltaNet | 11.8M | [🤗model](https://huggingface.co/ByteDance-Seed/AHN-DN-for-Qwen-2.5-Instruct-3B) |
| Qwen2.5-3B-Instruct | GatedDeltaNet | 13.0M | [🤗model](https://huggingface.co/ByteDance-Seed/AHN-GDN-for-Qwen-2.5-Instruct-3B) |
| Qwen2.5-7B-Instruct | Mamba2 | 18.6M | [🤗model](https://huggingface.co/ByteDance-Seed/AHN-Mamba2-for-Qwen-2.5-Instruct-7B) |
| Qwen2.5-7B-Instruct | DeltaNet | 18.5M | [🤗model](https://huggingface.co/ByteDance-Seed/AHN-DN-for-Qwen-2.5-Instruct-7B) |
| Qwen2.5-7B-Instruct | GatedDeltaNet | 21.3M | [🤗model](https://huggingface.co/ByteDance-Seed/AHN-GDN-for-Qwen-2.5-Instruct-7B) |
| Qwen2.5-14B-Instruct | Mamba2 | 51.4M | [🤗model](https://huggingface.co/ByteDance-Seed/AHN-Mamba2-for-Qwen-2.5-Instruct-14B) |
| Qwen2.5-14B-Instruct | DeltaNet | 51.1M | [🤗model](https://huggingface.co/ByteDance-Seed/AHN-DN-for-Qwen-2.5-Instruct-14B) |
| Qwen2.5-14B-Instruct | GatedDeltaNet | 61.0M | [🤗model](https://huggingface.co/ByteDance-Seed/AHN-GDN-for-Qwen-2.5-Instruct-14B) |
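Each checkpoint stores only the lightweight AHN weights. A hedged example of fetching one with `huggingface_hub` (attaching the weights to the frozen base model follows the code in the GitHub repository):

```python
# Download the AHN-only checkpoint for Qwen2.5-7B-Instruct + GatedDeltaNet.
from huggingface_hub import snapshot_download

ahn_path = snapshot_download("ByteDance-Seed/AHN-GDN-for-Qwen-2.5-Instruct-7B")
print(ahn_path)  # local directory containing the AHN module weights
```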
### Evaluation

#### LV-Eval & InfiniteBench Results

#### LongBench Results

## Contact

- Yunhao Fang: yunhao.fang@bytedance.com
- Weihao Yu (corresponding author): weihao.yu@bytedance.com

## Citation

**BibTeX:**

```bibtex
@article{fang2025artificial,
  title   = {Artificial hippocampus networks for efficient long-context modeling},
  author  = {Fang, Yunhao and Yu, Weihao and Zhong, Shu and Ye, Qinghao and Xiong, Xuehan and Wei, Lai},
  journal = {arXiv preprint arXiv:2510.07318},
  year    = {2025}
}
```