---
language:
- fr
size_categories:
- 100K<n<1M
task_categories:
- summarization
- image-to-text
- text-generation
tags:
- summarization
- vision
- DeepSeek-OCR
- multilingual
- visual-text-encoding
- random-augmentation
library_name: datasets
license: cc-by-nc-sa-4.0
pretty_name: MLSUM French News Summarization
dataset_info:
  features:
  - name: text
    dtype: string
  - name: summary
    dtype: string
  - name: image
    dtype: image
  - name: source_dataset
    dtype: string
  - name: original_split
    dtype: string
  - name: original_index
    dtype: int64
  splits:
  - name: train
    num_bytes: 35360851312
    num_examples: 392902
  download_size: 34743297549
  dataset_size: 35360851312
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# DeepSynth - MLSUM French News Summarization

## Dataset Description

A large-scale French news summarization dataset built from articles published by major French newspapers. It enables training multilingual DeepSeek-OCR models with proper Unicode and diacritics handling.

This dataset is part of the **DeepSynth** project, which uses visual text encoding for multilingual summarization with the DeepSeek-OCR vision-language model. Text documents are converted into images and processed through a frozen 380M-parameter visual encoder, enabling roughly 20x token compression while preserving document layout and structure.

### Key Features

- **Original High-Quality Images**: Full-resolution images stored once, augmented on-the-fly during training
- **Random Augmentation Pipeline**: Rotation, perspective, color jitter, and resize transforms for better generalization
- **Visual Text Encoding**: 20x compression ratio (1 visual token ≈ 20 text tokens)
- **Document Structure Preservation**: Layout and formatting maintained through image representation
- **Human-Written Summaries**: High-quality reference summaries for each document
- **Deduplication Tracking**: Source dataset and index tracking prevents duplicates

### Dataset Statistics

- **Total Samples**: 392,902
- **Language(s)**: French
- **Domain**: French news articles
- **Average Document Length**: ~700 tokens
- **Average Summary Length**: ~50 tokens

### Source Dataset

Based on **MLSUM (MultiLingual SUMmarization)** French subset.
- **Original Authors**: Scialom et al. (2020)
- **Paper**: [MLSUM: The Multilingual Summarization Corpus](https://arxiv.org/abs/2004.14900)
- **License**: CC BY-NC-SA 4.0

## Image Augmentation Pipeline

Images are stored at **original resolution** (up to 1600×2200) and augmented during training for better generalization:

### Available Augmentation Transforms

- **Random Rotation**: ±10° rotation for orientation invariance
- **Random Perspective**: 0.1-0.2 distortion to simulate viewing angles
- **Random Resize**: 512-1600px range for multi-scale learning
- **Color Jitter**: Brightness, contrast, saturation adjustments (±20%)
- **Random Horizontal Flip**: Optional (use with caution for text)

All transforms preserve aspect ratio (with padding where needed) so the text remains readable; a minimal example pipeline is sketched below. This approach:
- **Reduces storage**: ~6x less disk space (a single image instead of six pre-rendered resolutions)
- **Increases flexibility**: any resolution can be generated on the fly instead of relying on pre-computed fixed sizes
- **Improves generalization**: random transforms prevent overfitting to specific resolutions
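
The snippet below is a minimal sketch of an equivalent augmentation pipeline built with `torchvision`; it only approximates the transforms listed above, and the project's own `create_training_transform` (used in the training example later in this card) may differ in defaults and implementation.

```python
import random
from torchvision import transforms

def make_augmentation(rotation_degrees=10, distortion=0.1,
                      resize_range=(512, 1600), jitter=0.2):
    """Sketch of the augmentations described above (hypothetical helper)."""
    # Sample a target size for multi-scale training; Resize with a single int
    # rescales the shorter side, preserving aspect ratio.
    target = random.randint(*resize_range)
    return transforms.Compose([
        transforms.RandomRotation(rotation_degrees, fill=255),                # ±10°, white fill
        transforms.RandomPerspective(distortion_scale=distortion, fill=255),  # viewing-angle simulation
        transforms.ColorJitter(brightness=jitter, contrast=jitter,
                               saturation=jitter),                            # ±20% color jitter
        transforms.Resize(target),                                            # 512-1600px multi-scale
    ])

# augmented_image = make_augmentation()(sample["image"])
```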

## Dataset Structure

### Data Fields

- `text` (string): Original document text
- `summary` (string): Human-written summary
- `image` (PIL.Image): Original full-size rendered document image (up to 1600×2200)
- `source_dataset` (string): Origin dataset name
- `original_split` (string): Source split (train/validation/test)
- `original_index` (int): Original sample index for deduplication

### Data Example

```python
{
    'text': 'Le gouvernement français a annoncé de nouvelles mesures...',
    'summary': 'Nouvelles mesures gouvernementales contre le changement climatique.',
    'image': <PIL.Image>,  # Original resolution (up to 1600×2200)
    'source_dataset': 'MLSUM (fr)',
    'original_split': 'train',
    'original_index': 0
}
```

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load full dataset
dataset = load_dataset("baconnier/deepsynth-fr")

# Streaming for large datasets
dataset = load_dataset("baconnier/deepsynth-fr", streaming=True)
```
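
In streaming mode, samples can be consumed lazily without downloading the full dataset; a short (hypothetical) usage example:

```python
# Iterate a few samples from the streaming train split
stream = load_dataset("baconnier/deepsynth-fr", split="train", streaming=True)
for sample in stream.take(3):
    print(sample["summary"])
    print(sample["image"].size)  # PIL image, original resolution
```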

### Training Example with DeepSeek-OCR and Augmentation

```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from datasets import load_dataset
from deepsynth.data.transforms import create_training_transform

# Load model and processor
model = AutoModelForVision2Seq.from_pretrained("deepseek-ai/DeepSeek-OCR")
processor = AutoProcessor.from_pretrained("deepseek-ai/DeepSeek-OCR")

# Load dataset
dataset = load_dataset("baconnier/deepsynth-fr")

# Create augmentation pipeline (random rotation, perspective, resize, color jitter)
transform = create_training_transform(
    target_size_range=(512, 1600),  # Random resize range
    rotation_degrees=10,             # ±10° rotation
    perspective_distortion=0.1,      # Perspective transform
    brightness_factor=0.2,           # ±20% brightness
    contrast_factor=0.2,             # ±20% contrast
)

# Process sample with augmentation
sample = dataset['train'][0]
augmented_image = transform(sample['image'])  # Apply random transforms
inputs = processor(
    images=augmented_image,
    text=sample['text'],
    return_tensors="pt"
)

# Fine-tune decoder only (freeze encoder)
for param in model.encoder.parameters():
    param.requires_grad = False

# Training loop with on-the-fly augmentation...
```

## Training Recommendations

### DeepSeek-OCR Fine-Tuning

```python
# Recommended hyperparameters with augmentation
training_args = {
    "learning_rate": 2e-5,
    "batch_size": 4,
    "gradient_accumulation_steps": 4,
    "num_epochs": 3,
    "mixed_precision": "bf16",
    "freeze_encoder": True,  # IMPORTANT: Only fine-tune decoder

    # Augmentation parameters
    "rotation_degrees": 10,           # Random rotation ±10°
    "perspective_distortion": 0.1,    # Perspective transform
    "resize_range": (512, 1600),      # Random resize 512-1600px
    "brightness_factor": 0.2,         # ±20% brightness
    "contrast_factor": 0.2,           # ±20% contrast
}
```
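
As a rough illustration, the core hyperparameters above map onto a standard Hugging Face `TrainingArguments` setup (a sketch, not the project's own training script); the augmentation parameters are passed to the data transform shown in the training example, not to the trainer.

```python
from transformers import TrainingArguments

# Sketch: core hyperparameters from the table above.
training_args = TrainingArguments(
    output_dir="deepsynth-fr-decoder",   # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    bf16=True,                           # mixed precision
    remove_unused_columns=False,         # keep the image column for the collator
)
```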

### Expected Performance

- **Baseline (text-to-text)**: ROUGE-1 ~40-42
- **DeepSeek-OCR (visual)**: ROUGE-1 ~44-47 (expected, in the range of current SOTA)
- **Training Time**: ~6-8 hours on A100 (80GB) for full dataset
- **GPU Memory**: ~40GB with batch_size=4, mixed_precision=bf16

## Dataset Creation

This dataset was created using the **DeepSynth** pipeline:

1. **Source Loading**: Original text documents from MLSUM (fr)
2. **Text-to-Image Conversion**: Documents rendered as PNG images (DejaVu Sans 12pt, Unicode support)
3. **Original Resolution Storage**: Full-quality images stored once (up to 1600×2200)
4. **Incremental Upload**: Batches of 5,000 samples uploaded to HuggingFace Hub
5. **Deduplication**: Source tracking prevents duplicate samples

**Note**: Images are augmented on-the-fly during training using random transformations (rotation, perspective, resize, color jitter) for better generalization across different resolutions and conditions.

### Rendering Specifications

- **Font**: DejaVu Sans 12pt (full Unicode support for multilingual text)
- **Line Wrapping**: 100 characters per line
- **Margin**: 40px
- **Background**: White (255, 255, 255)
- **Text Color**: Black (0, 0, 0)
- **Format**: PNG with lossless compression
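
For reference, below is a minimal sketch of how a document could be rendered under these specifications with Pillow; it is an illustrative approximation rather than the exact DeepSynth renderer, and assumes `DejaVuSans.ttf` is available on the font path.

```python
import textwrap
from PIL import Image, ImageDraw, ImageFont

def render_document(text, font_path="DejaVuSans.ttf", font_size=12,
                    wrap_width=100, margin=40, max_size=(1600, 2200)):
    """Illustrative renderer following the specs above (hypothetical helper)."""
    font = ImageFont.truetype(font_path, font_size)
    lines = []
    for paragraph in text.split("\n"):
        lines.extend(textwrap.wrap(paragraph, width=wrap_width) or [""])

    line_height = font.getbbox("Ag")[3] + 4                        # approximate line spacing
    width = min(max_size[0], 2 * margin + wrap_width * font_size)  # rough width estimate
    height = min(max_size[1], 2 * margin + line_height * len(lines))

    image = Image.new("RGB", (width, height), (255, 255, 255))     # white background
    draw = ImageDraw.Draw(image)
    y = margin
    for line in lines:
        draw.text((margin, y), line, fill=(0, 0, 0), font=font)    # black text
        y += line_height
    return image

# render_document("Le gouvernement français a annoncé ...").save("doc.png")  # lossless PNG
```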

## Citation

If you use this dataset in your research, please cite:

```bibtex
@misc{deepsynth-fr,
    title={{DeepSynth MLSUM French News Summarization: Visual Text Encoding with Random Augmentation for Summarization}},
    author={Baconnier},
    year={2025},
    publisher={HuggingFace},
    howpublished={\url{https://huggingface.co/datasets/baconnier/deepsynth-fr}}
}
```

### Source Dataset Citation

```bibtex
@inproceedings{scialom2020mlsum,
    title={MLSUM: The Multilingual Summarization Corpus},
    author={Scialom, Thomas and Dray, Paul-Alexis and Lamprier, Sylvain and Piwowarski, Benjamin and Staiano, Jacopo},
    booktitle={Proceedings of EMNLP},
    year={2020}
}
```

## License

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International

**Note**: This dataset inherits the license from the original source dataset. Please review the source license before commercial use.

## Limitations and Bias

- **French-specific**: Requires French language models
- **Diacritics**: Proper handling of accents (é, è, ê, etc.) is critical
- **Regional**: May contain France-specific cultural references
- **News domain**: Limited to journalistic style

## Additional Information

### Dataset Curators

Created by the DeepSynth team as part of multilingual visual summarization research.

### Contact

- **Repository**: [DeepSynth GitHub](https://github.com/bacoco/DeepSynth)
- **Issues**: [GitHub Issues](https://github.com/bacoco/DeepSynth/issues)

### Acknowledgments

- **DeepSeek-OCR**: Visual encoder from DeepSeek AI
- **Source Dataset**: MLSUM (fr)
- **HuggingFace**: Dataset hosting and infrastructure