---
pipeline_tag: text-generation
language:
- multilingual
inference: false
license: cc-by-nc-4.0
library_name: transformers
base_model:
- jinaai/ReaderLM-v2
tags:
- vllm
- awq
- 4bit
---

<br><br>

<p align="center">
<img src="https://huggingface.co/datasets/jinaai/documentation-images/resolve/main/logo.webp" alt="Jina AI: Your Search Foundation, Supercharged!" width="150px">
</p>

<p align="center">
<b>Trained by <a href="https://jina.ai/">Jina AI</a>.</b>
</p>

[Blog](https://jina.ai/news/readerlm-v2-frontier-small-language-model-for-html-to-markdown-and-json) | [API](https://jina.ai/reader) | [Colab](https://colab.research.google.com/drive/1FfPjZwkMSocOLsEYH45B3B4NxDryKLGI?usp=sharing) | [AWS](https://aws.amazon.com/marketplace/pp/prodview-jwfct4j4rvxk2?sr=0-21&ref_=beagle&applicationId=AWSMPContessa) | [Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/jinaai.reader-lm-v2-vm) | [arXiv](https://arxiv.org/abs/2503.01151)

# ReaderLM-v2 (AWQ-4bit 128g)

`ReaderLM-v2` is a 1.5B-parameter language model that converts raw HTML into beautifully formatted markdown or JSON with superior accuracy and improved long-context handling. Supporting 29 languages, `ReaderLM-v2` is specialized for tasks involving HTML parsing, transformation, and text extraction. This repository hosts an AWQ 4-bit (group size 128) quantization of the model.

## What's New in `ReaderLM-v2`

`ReaderLM-v2` represents a significant leap forward from its predecessor, with several key improvements:

- **Better Markdown Generation**: Thanks to its new training paradigm and higher-quality training data, the model excels at generating complex elements like code fences, nested lists, tables, and LaTeX equations.
- **JSON Output**: Introduces direct HTML-to-JSON generation using predefined schemas, eliminating the need for intermediate markdown conversion.
- **Longer Context Handling**: Handles up to 512K tokens of combined input and output, with improved performance on long-form content.
- **Multilingual Support**: Comprehensive support across 29 languages for broader applications.
- **Enhanced Stability**: Greatly alleviates degeneration issues after generating long sequences, achieved through a contrastive loss during training.

## Model Overview

- **Model Type**: Autoregressive, decoder-only transformer
- **Parameter Count**: 1.54B
- **Context Window**: Up to 512K tokens (combined input and output)
- **Hidden Size**: 1536
- **Number of Layers**: 28
- **Query Heads**: 12
- **KV Heads**: 2
- **Head Size**: 128
- **Intermediate Size**: 8960
- **Supported Languages**: English, Chinese, Japanese, Korean, French, Spanish, Portuguese, German, Italian, Russian, Vietnamese, Thai, Arabic, and more (29 total)
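
These architecture details can be verified programmatically. A minimal sketch, assuming the standard Qwen2 config attribute names used by `transformers` (the model is Qwen2.5-based, per the training details below):

```python
from transformers import AutoConfig

# Load the configuration of the base model that this checkpoint quantizes
config = AutoConfig.from_pretrained("jinaai/ReaderLM-v2")

print(config.hidden_size)          # 1536
print(config.num_hidden_layers)    # 28
print(config.num_attention_heads)  # 12 query heads
print(config.num_key_value_heads)  # 2 KV heads (grouped-query attention)
print(config.intermediate_size)    # 8960
```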

---

# Usage

Below, you will find instructions and examples for running `ReaderLM-v2` locally with the Hugging Face Transformers library.
For a more hands-on experience in a hosted environment, see the [Google Colab Notebook](https://colab.research.google.com/drive/1FfPjZwkMSocOLsEYH45B3B4NxDryKLGI?usp=sharing).

## Via Reader API

`ReaderLM-v2` is now fully integrated with [Reader API](https://jina.ai/reader/). To use it, simply specify `x-engine: readerlm-v2` in your request headers and enable response streaming with `-H 'Accept: text/event-stream'`:

```bash
curl https://r.jina.ai/https://news.ycombinator.com/ -H 'x-engine: readerlm-v2' -H 'Accept: text/event-stream'
```
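
The same request from Python, as a minimal sketch using the `requests` package (the line-by-line event handling here is an illustration, not an official client):

```python
import requests

# Stream the converted output from Reader API using the readerlm-v2 engine
response = requests.get(
    "https://r.jina.ai/https://news.ycombinator.com/",
    headers={"x-engine": "readerlm-v2", "Accept": "text/event-stream"},
    stream=True,
)
for line in response.iter_lines(decode_unicode=True):
    if line:
        print(line)
```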

You can try it without an API key at a lower rate limit. For higher rate limits, you can purchase an API key. Please note that ReaderLM-v2 requests consume 3x the normal token count from your API key allocation. This is currently an experimental feature, and we're working with the GCP team to improve GPU efficiency.

## On Google Colab

You can try `ReaderLM-v2` via our [Colab notebook](https://colab.research.google.com/drive/1FfPjZwkMSocOLsEYH45B3B4NxDryKLGI?usp=sharing), which demonstrates HTML-to-markdown conversion, JSON extraction, and instruction-following using the HackerNews frontpage as an example. The notebook is optimized for Colab's free T4 GPU tier and requires `vllm` and `triton` for acceleration.

Note that the free T4 GPU has limitations: it doesn't support bfloat16 or flash attention 2, leading to higher memory usage and slower processing of longer inputs. Nevertheless, ReaderLM-v2 successfully processes large documents under these constraints, achieving processing speeds of 67 tokens/s input and 36 tokens/s output. For production use, we recommend an RTX 3090/4090 for optimal performance.

## Local Usage

To use `ReaderLM-v2` locally:

1. Install the necessary dependencies:

```bash
pip install transformers
```
87
+
88
+ 2. Load and run the model:
89
+
90
+ ```python
91
+ from transformers import AutoModelForCausalLM, AutoTokenizer
92
+
93
+ device = "cuda" # or "cpu"
94
+ tokenizer = AutoTokenizer.from_pretrained("jinaai/ReaderLM-v2")
95
+ model = AutoModelForCausalLM.from_pretrained("jinaai/ReaderLM-v2").to(device)
96
+ ```

3. (Optional) Pre-clean your HTML to remove scripts, styles, meta tags, link tags, and comments, reducing the noise and length of the input:

```python
import re

# Patterns
SCRIPT_PATTERN = r"<[ ]*script.*?\/[ ]*script[ ]*>"
STYLE_PATTERN = r"<[ ]*style.*?\/[ ]*style[ ]*>"
META_PATTERN = r"<[ ]*meta.*?>"
COMMENT_PATTERN = r"<[ ]*!--.*?--[ ]*>"
LINK_PATTERN = r"<[ ]*link.*?>"
BASE64_IMG_PATTERN = r'<img[^>]+src="data:image/[^;]+;base64,[^"]+"[^>]*>'
SVG_PATTERN = r"(<svg[^>]*>)(.*?)(<\/svg>)"


def replace_svg(html: str, new_content: str = "this is a placeholder") -> str:
    # Swap the body of each inline SVG for a short placeholder
    return re.sub(
        SVG_PATTERN,
        lambda match: f"{match.group(1)}{new_content}{match.group(3)}",
        html,
        flags=re.DOTALL,
    )


def replace_base64_images(html: str, new_image_src: str = "#") -> str:
    # Replace heavyweight base64-encoded images with a lightweight stub
    return re.sub(BASE64_IMG_PATTERN, f'<img src="{new_image_src}"/>', html)


def clean_html(html: str, clean_svg: bool = False, clean_base64: bool = False) -> str:
    # Strip scripts, styles, meta tags, comments, and link tags
    html = re.sub(
        SCRIPT_PATTERN, "", html, flags=re.IGNORECASE | re.MULTILINE | re.DOTALL
    )
    html = re.sub(
        STYLE_PATTERN, "", html, flags=re.IGNORECASE | re.MULTILINE | re.DOTALL
    )
    html = re.sub(
        META_PATTERN, "", html, flags=re.IGNORECASE | re.MULTILINE | re.DOTALL
    )
    html = re.sub(
        COMMENT_PATTERN, "", html, flags=re.IGNORECASE | re.MULTILINE | re.DOTALL
    )
    html = re.sub(
        LINK_PATTERN, "", html, flags=re.IGNORECASE | re.MULTILINE | re.DOTALL
    )

    if clean_svg:
        html = replace_svg(html)
    if clean_base64:
        html = replace_base64_images(html)
    return html
```
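
A quick check of the helper on a toy snippet (a hypothetical input, just to show the interface):

```python
raw = '<html><head><script>alert(1)</script></head><body><h1>Title</h1></body></html>'
print(clean_html(raw, clean_svg=True, clean_base64=True))
# -> '<html><head></head><body><h1>Title</h1></body></html>'
```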

4. Create a prompt for the model:

```python
def create_prompt(
    text: str, tokenizer=None, instruction: str = None, schema: str = None
) -> str:
    """
    Create a prompt for the model with optional instruction and JSON schema.
    """
    if not instruction:
        instruction = "Extract the main content from the given HTML and convert it to Markdown format."
    if schema:
        # When a schema is supplied, switch to the JSON-extraction instruction
        # and append the schema to the prompt
        instruction = "Extract the specified information from a list of news threads and present it in a structured JSON format."
        prompt = f"{instruction}\n```html\n{text}\n```\nThe JSON schema is as follows:```json\n{schema}\n```"
    else:
        prompt = f"{instruction}\n```html\n{text}\n```"

    messages = [
        {
            "role": "user",
            "content": prompt,
        }
    ]

    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
```

### HTML to Markdown Example

```python
html = "<html><body><h1>Hello, world!</h1></body></html>"

html = clean_html(html)

input_prompt = create_prompt(html, tokenizer=tokenizer)
inputs = tokenizer.encode(input_prompt, return_tensors="pt").to(device)
outputs = model.generate(
    inputs, max_new_tokens=1024, temperature=0, do_sample=False, repetition_penalty=1.08
)

print(tokenizer.decode(outputs[0]))
```

### HTML to JSON Example

```python
schema = """
{
  "type": "object",
  "properties": {
    "title": {
      "type": "string"
    },
    "author": {
      "type": "string"
    },
    "date": {
      "type": "string"
    },
    "content": {
      "type": "string"
    }
  },
  "required": ["title", "author", "date", "content"]
}
"""

html = clean_html(html)
input_prompt = create_prompt(html, tokenizer=tokenizer, schema=schema)

inputs = tokenizer.encode(input_prompt, return_tensors="pt").to(device)
outputs = model.generate(
    inputs, max_new_tokens=1024, temperature=0, do_sample=False, repetition_penalty=1.08
)

print(tokenizer.decode(outputs[0]))
```
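
As an alternative to the `transformers` workflow above, this AWQ 4-bit checkpoint is intended for vLLM (see the `vllm` and `awq` tags). A minimal serving sketch; the model path below is a placeholder for this repository's id, not a verified value:

```python
from vllm import LLM, SamplingParams

# Placeholder id: substitute the actual path or Hub id of this AWQ repository
llm = LLM(model="path/to/ReaderLM-v2-awq", quantization="awq")

# Mirror the generation settings used in the transformers examples above
sampling_params = SamplingParams(temperature=0, max_tokens=1024, repetition_penalty=1.08)

# Reuse create_prompt() from step 4 to build the chat-formatted input
outputs = llm.generate([input_prompt], sampling_params)
print(outputs[0].outputs[0].text)
```

vLLM runs its AWQ kernels directly on the quantized weights, so no separate dequantization step is needed before serving.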

## Model Performance

ReaderLM-v2 has been extensively evaluated on various tasks:

### Quantitative Evaluation

For HTML-to-Markdown tasks, the model outperforms much larger models like Qwen2.5-32B-Instruct and Gemini2-flash-expr, achieving:
- ROUGE-L: 0.84
- Levenshtein Distance: 0.22
- Jaro-Winkler Similarity: 0.82

For HTML-to-JSON tasks, it shows competitive performance with:
- F1 Score: 0.81
- Precision: 0.82
- Recall: 0.81
- Pass-Rate: 0.98
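
For orientation, these text-similarity metrics can be reproduced with off-the-shelf packages. A minimal sketch, assuming the `rouge-score` and `Levenshtein` packages (this is not the official evaluation code, and the strings are hypothetical):

```python
import Levenshtein
from rouge_score import rouge_scorer

prediction = "# Hello, world!"  # model output (hypothetical)
reference = "# Hello, world!"   # ground-truth markdown (hypothetical)

# ROUGE-L: longest-common-subsequence overlap (higher is better)
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(reference, prediction)["rougeL"].fmeasure

# Normalized Levenshtein distance (lower is better)
lev = Levenshtein.distance(reference, prediction) / max(len(reference), len(prediction))

# Jaro-Winkler similarity (higher is better)
jw = Levenshtein.jaro_winkler(reference, prediction)

print(rouge_l, lev, jw)
```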

### Qualitative Evaluation

The model excels in three key dimensions:
- Content Integrity: 39/50
- Structural Accuracy: 35/50
- Format Compliance: 36/50

These scores demonstrate strong performance in preserving semantic information, maintaining structural accuracy, and adhering to markdown syntax standards.

## Training Details

ReaderLM-v2 is built on Qwen2.5-1.5B-Instruct and trained using a sophisticated pipeline:

1. Data Preparation: Created the html-markdown-1m dataset of 1 million HTML documents
2. Synthetic Data Generation: Three-step pipeline using Qwen2.5-32B-Instruct
   - Drafting: Initial markdown and JSON generation
   - Refinement: Content cleanup and structure alignment
   - Critique: Quality evaluation and filtering

3. Training Process:
   - Long-context pretraining
   - Supervised fine-tuning
   - Direct preference optimization
   - Self-play reinforcement tuning