# Tigre 3-gram Language Model (KenLM)

## Overview
This repository provides a 3-gram Language Model (LM) for the Tigre language, trained using the KenLM toolkit. This model is a foundational resource for various downstream NLP and speech applications, including:
- Rescoring hypotheses in Automatic Speech Recognition (ASR); a sketch appears at the end of this section.
- Improving text generation and fluency in Machine Translation (MT).
- Performing basic text filtering and quality control.
The model is provided in the standard ARPA (`.arpa`) text format. For production use, KenLM's `build_binary` tool can compile it into the toolkit's binary format, which loads faster and can be memory-mapped for efficient serving.
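As an illustration of the rescoring use case listed above, the sketch below combines hypothetical acoustic scores from an ASR decoder with LM scores from this model. The N-best list, interpolation weight, and local file path are placeholder assumptions, not part of this repository (see the usage section below for downloading the model file).

```python
import kenlm

# Load the ARPA model (path is illustrative; see "How to Use the Model" below).
lm = kenlm.Model("tigre-data-kenLM.arpa")

# Hypothetical N-best list from an ASR decoder: (hypothesis text, acoustic log score).
nbest = [
    ("hypothesis text one", -41.3),
    ("hypothesis text two", -42.7),
]

LM_WEIGHT = 0.5  # illustrative weight; tune on held-out data

# Pick the hypothesis with the best combined acoustic + weighted LM score (both log-domain).
best_text, _ = max(nbest, key=lambda h: h[1] + LM_WEIGHT * lm.score(h[0]))
print("Best hypothesis after rescoring:", best_text)
```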
## Model Statistics
This language model was trained using KenLM on the Tigre Monolingual Text Dataset (Tigre-Data 1.0).
| Statistic | Value |
|---|---|
| Model Order | 3-gram |
| Vocabulary Size (Unique 1-grams) | 316,548 |
| Total Unique N-grams (1-to-3) | 1,285,462 |
| Example Perplexity (on a sample Tigre sentence) | 147.12 |
Note: The total raw training tokens used for this model can be found in the Tigre Monolingual Text Dataset card (approximately 14.7 million tokens).
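For context on the perplexity figure above: KenLM's Python bindings compute perplexity directly from the sentence-level log10 probability, normalized by the number of scored tokens (the words plus the end-of-sentence marker). A minimal sanity check, assuming an illustrative local model path:

```python
import kenlm

lm = kenlm.Model("tigre-data-kenLM.arpa")  # illustrative path; see "How to Use the Model" below

# Perplexity = 10 ** (-log10 P(sentence) / N), where N = number of words + 1 for </s>
sentence = "a b c"  # placeholder; substitute any Tigre sentence
n_tokens = len(sentence.split()) + 1
assert abs(lm.perplexity(sentence) - 10 ** (-lm.score(sentence) / n_tokens)) < 1e-6
```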
## Training Data Source
This model was trained exclusively on the BeitTigreAI/tigre-data-monolingual-text dataset. More detailed information about the training data, including its domain, bias, preprocessing steps, and source statistics, can be found in the dataset's documentation: Tigre Monolingual Text Dataset README
## Files and Structure
The repository contains the following files:
```
tigre-data-kenLM/
├── README.md
├── hf_readme.ipynb
└── tigre-data-kenLM.arpa
```
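If you want to verify the hosted files programmatically, `huggingface_hub` can list the repository contents; a small sketch, using the same repo id as the usage example below:

```python
from huggingface_hub import list_repo_files

# List the files hosted in this model repository.
print(list_repo_files("BeitTigreAI/tigre-data-kenLM", repo_type="model"))
```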
## How to Use the Model
You can load and query the model using the Python bindings for KenLM (kenlm).
### Installation
To use the model in Python, install the KenLM bindings along with `huggingface_hub` (used below to download the model file):
```bash
pip install kenlm huggingface_hub
```
### Example Usage (Perplexity and Score)
The following Python code demonstrates how to load the model and query it for log probability and perplexity:
```python
import kenlm
from huggingface_hub import hf_hub_download
# 1. Download the ARPA model file from the Hugging Face Hub
arpa_path = hf_hub_download(
repo_id="BeitTigreAI/tigre-data-kenLM",
filename="tigre-data-kenLM.arpa",
repo_type="model"
)
# 2. Load the KenLM model
lm = kenlm.Model(arpa_path)
# Example single sentence to score
test_sentence = "αααα αααα ααα₯α"  # replace with any Tigre sentence
# A. Calculate Log10 Probability of the entire sentence
log_prob = lm.score(test_sentence)
print(f"Sentence: '{test_sentence}'")
print(f"Log10 Probability: {log_prob:.4f}")
# B. Calculate Perplexity of the entire sentence
perplexity = lm.perplexity(test_sentence)
print(f"Perplexity: {perplexity:.2f}")
## Licensing and Citation
The Tigre 3-gram Language Model is licensed under CC-BY-SA-4.0.
### Citation
If you use this resource in your work, please cite the repository by referencing its Hugging Face entry:
Recommended Citation Format:
- Repository Name: Tigre 3-gram Language Model (KenLM)
- Organization: BeitTigreAI
- URL: https://huggingface.co/datasets/BeitTigreAI/tigre-data-kenLM