Fix: Correct base model from Qwen3-2B to Qwen3-8B
README.md
CHANGED
@@ -22,7 +22,7 @@ The model.pt file (16.4GB) has been split into **9 shards** of ~2GB each for eas
 
 ## Model Details
 
-- **Base Model**: Qwen3-2B fine-tuned for chess
+- **Base Model**: Qwen3-8B fine-tuned for chess
 - **Compilation**: optimum-neuron[vllm]==0.3.0
 - **Compiler Version**: neuronxcc 2.21.33363.0
 - **Target Hardware**: AWS Trainium (trn1) / Inferentia (inf2)
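
The hunk context notes that model.pt (16.4 GB) was split into 9 shards of roughly 2 GB each. Below is a minimal sketch of how the shards could be reassembled after download, assuming a `model.pt.partNN` naming scheme; the actual shard names are not shown in this diff, so check the repository's file listing before using it.

```python
# Minimal sketch: reassemble the split checkpoint into a single model.pt.
# Assumes shards are named model.pt.part00 ... model.pt.part08 (hypothetical;
# verify against the repository's actual file listing).
import glob

shard_paths = sorted(glob.glob("model.pt.part*"))
assert len(shard_paths) == 9, f"expected 9 shards, found {len(shard_paths)}"

with open("model.pt", "wb") as out:
    for path in shard_paths:
        with open(path, "rb") as shard:
            # Stream in 64 MB chunks so the 16.4 GB file never sits in memory.
            while chunk := shard.read(64 * 1024 * 1024):
                out.write(chunk)

print(f"Reassembled model.pt from {len(shard_paths)} shards")
```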