Upload README.md with huggingface_hub

README.md CHANGED

---
license: gemma
tags:
- tflite
- gemma
- mobile
- flutter
- edge-ai
- quantized
model_type: gemma
inference: false
---

# Gemma 2B TFLite Model for Mobile

This repository contains a TensorFlow Lite version of the Gemma 2B Instruct model, optimized for mobile deployment.

## Model Information

- **Base Model**: Gemma 2B Instruct
- **Format**: TensorFlow Lite
- **Quantization**: INT4 (GPU optimized)
- **Model Size**: 1.1 GB
- **Framework**: TensorFlow Lite

## Files

- `gemma-2b-it-gpu-int4.tflite` - The quantized TFLite model
- `tokenizer.model` - SentencePiece tokenizer

## Usage

### Download in Flutter App

```dart
const modelUrl = 'https://huggingface.co/mayur1496/gemma-2b-tflite/resolve/main/gemma-2b-it-gpu-int4.tflite';
const tokenizerUrl = 'https://huggingface.co/mayur1496/gemma-2b-tflite/resolve/main/tokenizer.model';

// Use the dio or http package to download these files (see the sketch below).
```
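
A minimal download sketch, assuming the `dio` and `path_provider` packages (neither ships with this repo; plain `http` also works, but `dio`'s `download` streams straight to disk, which matters for a ~1.1 GB file):

```dart
import 'package:dio/dio.dart';
import 'package:path_provider/path_provider.dart';

/// Downloads the model into the app documents directory and returns its path.
Future<String> downloadModel() async {
  final dir = await getApplicationDocumentsDirectory();
  final modelPath = '${dir.path}/gemma-2b-it-gpu-int4.tflite';

  // download() streams the response to disk instead of buffering it in memory.
  await Dio().download(
    modelUrl,
    modelPath,
    onReceiveProgress: (received, total) {
      // Hook for a progress indicator; total is -1 if Content-Length is missing.
    },
  );
  return modelPath;
}
```

`tokenizer.model` can be fetched the same way using `tokenizerUrl`.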

### Load Model

```dart
import 'dart:io';

import 'package:tflite_flutter/tflite_flutter.dart';

// The model is downloaded at runtime, so load it from disk (modelPath comes from
// the download step above) rather than from the Flutter asset bundle.
final interpreter = Interpreter.fromFile(File(modelPath));
```
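
Because this is the GPU-optimized INT4 variant, you will likely want to attach a GPU delegate. A sketch, assuming the Android `GpuDelegateV2` exposed by `tflite_flutter` (iOS uses the Metal delegate; check the plugin docs for the exact class and options on your platform):

```dart
import 'dart:io';

import 'package:tflite_flutter/tflite_flutter.dart';

// Attach the GPU delegate before the interpreter is created.
final options = InterpreterOptions()..addDelegate(GpuDelegateV2());
final gpuInterpreter = Interpreter.fromFile(File(modelPath), options: options);
```

If the delegate is unavailable on a given device, fall back to the plain interpreter shown above.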

## Requirements

- **RAM**: 4 GB+ recommended
- **Storage**: 1.5 GB free space
- **Platform**: Android (API 21+) / iOS (12.0+)

## License

This model is released under the Gemma license. See the [Gemma License](https://ai.google.dev/gemma/terms) for details.

## Citation

```bibtex
@article{gemma_2024,
  title={Gemma: Open Models Based on Gemini Research and Technology},
  author={Gemma Team},
  year={2024}
}
```