AdaptLLM committed
Commit 349ec61 · verified · 1 Parent(s): 615332a

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -11,7 +11,7 @@ tags:
 - medical
 - chemistry
 ---
-# Adapting Multimodal Large Language Models to Domains via Post-Training
+# Adapting Multimodal Large Language Models to Domains via Post-Training (EMNLP 2025)
 
 This repo contains the **biomedicine MLLM developed from Qwen-2-VL-2B-Instruct** in our paper: [On Domain-Specific Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930). The corresponding training dataset is in [medicine-visual-instructions](https://huggingface.co/datasets/AdaptLLM/medicine-visual-instructions).
 
@@ -108,10 +108,10 @@ See [Post-Train Guide](https://github.com/bigai-ai/QA-Synthesizer/blob/main/docs
 ## Citation
 If you find our work helpful, please cite us.
 
-[AdaMLLM](https://huggingface.co/papers/2411.19930)
+[Adapt MLLM to Domains](https://huggingface.co/papers/2411.19930) (EMNLP 2025 Findings)
 ```bibtex
 @article{adamllm,
-title={On Domain-Specific Post-Training for Multimodal Large Language Models},
+title={On Domain-Adaptive Post-Training for Multimodal Large Language Models},
 author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
 journal={arXiv preprint arXiv:2411.19930},
 year={2024}
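The commit itself only touches the title and the citation, but the README it edits describes a Qwen-2-VL-2B-Instruct-based model hosted on the Hub. As a minimal sketch of how such a checkpoint is typically loaded with the standard `transformers` Qwen2-VL interface: the repo id, image path, and prompt below are placeholders for illustration, not values taken from this commit.

```python
# Hedged sketch: loading a Qwen2-VL-2B-Instruct-derived model from the Hugging Face Hub.
# "AdaptLLM/medicine-MLLM", "example.jpg", and the prompt are placeholders, not confirmed by the commit.
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "AdaptLLM/medicine-MLLM"  # placeholder; substitute the actual repo id
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto")
processor = AutoProcessor.from_pretrained(model_id)

# Build a single-image chat prompt and run generation.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe the finding in this image."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[Image.open("example.jpg")], return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```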