Command-R 35B – CPT (Continual Pretraining with LoRA)
Model type: Causal Language Model
Base model: CohereLabs/c4ai-command-r-v01
License: Apache 2.0
Framework: Axolotl
Overview
commandr-35b-cpt is a continually pretrained version of Cohere's Command-R 35B model, trained with LoRA adapters for efficient energy-domain adaptation.
The goal of CPT is to extend the model's general reasoning, factual grounding, and domain knowledge across science, governance, and energy-domain text.
Training was performed on the Leonardo EuroHPC system using Axolotl with DeepSpeed ZeRO-1 optimization.
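The adapter is meant to be applied on top of the base checkpoint for inference. Below is a minimal sketch using `transformers` and `peft`; the adapter path is a placeholder for this repository's id, and the prompt and generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "CohereLabs/c4ai-command-r-v01"
adapter_id = "path/to/commandr-35b-cpt"  # placeholder: replace with this adapter's repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # matches the bfloat16 training precision
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Grid-scale battery storage is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```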
Training Setup
Objective: Language modeling (unsupervised continual pretraining)
Adapter type: LoRA
Precision: bfloat16
Hardware: 8 nodes × 2 × NVIDIA A100 64GB GPUs (16 GPUs total)
Framework: DeepSpeed ZeRO-1, Axolotl, PyTorch 2.5.1+cu121
Runtime: ~24 hours
Checkpoints: Saved every 1/5 of an epoch
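For reference, the ZeRO-1 and precision settings above correspond to a DeepSpeed configuration along these lines. This is a sketch reconstructed from the list, not the exact configuration file used on Leonardo; the batch-size values mirror the hyperparameter table further down.

```python
# Sketch of a DeepSpeed ZeRO-1 configuration consistent with the setup above
# (reconstructed, not the published training config).
ds_config = {
    "zero_optimization": {"stage": 1},    # ZeRO stage 1: partition optimizer states only
    "bf16": {"enabled": True},            # bfloat16 mixed precision
    "train_micro_batch_size_per_gpu": 1,  # see "Micro batch size" below
    "gradient_accumulation_steps": 4,     # see "Gradient accumulation" below
}
```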
Dataset
Public energy-domain text sources (see the loading sketch after this list):
- arxiv.jsonl – scientific and technical papers
- gov.jsonl – public governmental documents
- news.jsonl – news articles
- wiki.jsonl – Wikipedia text
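A hedged sketch of how these .jsonl shards could be combined into a single pretraining corpus with the `datasets` library; the file names are taken verbatim from the list above, while the `text` field name is an assumption about the shard schema.

```python
from datasets import load_dataset, concatenate_datasets

# File names as listed above; the "text" column is an assumed schema, not confirmed by this card.
files = ["arxiv.jsonl", "gov.jsonl", "news.jsonl", "wiki.jsonl"]
shards = [load_dataset("json", data_files=f, split="train") for f in files]
corpus = concatenate_datasets(shards)

print(corpus)                    # row counts for the combined corpus
print(corpus[0]["text"][:200])   # peek at the first document
```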
Hyperparameters
| Parameter | Value |
|---|---|
| Sequence length | 2048 |
| Micro batch size | 1 |
| Gradient accumulation | 4 |
| Epochs | 1 |
| Max steps | 10000 |
| Learning rate | 0.0002 |
| LR scheduler | cosine |
| Optimizer | AdamW (8-bit) |
| Warmup steps | 10 |
| Weight decay | 0.0 |
| LoRA rank (r) | 16 |
| LoRA alpha | 32 |
| LoRA dropout | 0.05 |
| LoRA target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Gradient checkpointing | Enabled |
| Flash attention | Enabled |
| Auto resume | Enabled |
| Loss watchdog threshold | 5.0 |
| Loss watchdog patience | 3 |
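The LoRA rows of the table map directly onto a `peft.LoraConfig`; a sketch follows, with the effective global batch size worked out in a comment under the assumption that all 16 GPUs act as data-parallel replicas.

```python
from peft import LoraConfig

# LoRA settings copied from the table above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)

# Effective global batch size, assuming 16 data-parallel GPUs (8 nodes x 2 GPUs):
#   1 (micro batch) x 4 (grad. accumulation) x 16 (GPUs) = 64 sequences per optimizer step
#   64 sequences x 2048 tokens = 131,072 tokens per optimizer step
```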
Tokenizer
Tokenizer type: AutoTokenizer
Special token: <|end_of_text|> as pad_token
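A minimal sketch of the tokenizer setup described above; whether <|end_of_text|> already exists in the base vocabulary is not stated here, so the padding assignment is shown defensively.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CohereLabs/c4ai-command-r-v01")

# Use <|end_of_text|> as the padding token, as configured for training.
if tokenizer.pad_token is None:
    tokenizer.pad_token = "<|end_of_text|>"

print(tokenizer.pad_token, tokenizer.pad_token_id)
```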