Command-R 35B - CPT (Continual Pretraining with LoRA)

Model type: Causal Language Model
Base model: CohereLabs/c4ai-command-r-v01
License: Apache 2.0
Framework: Axolotl


Overview

commandr-35b-cpt is a continual-pretrained version of Cohere's Command-R 35B model, trained with LoRA adapters for efficient energy-domain adaptation. The goal of CPT is to extend the model's general reasoning, factual grounding, and domain knowledge across science, governance, and energy-domain text.

Training was performed on the Leonardo EuroHPC system using Axolotl with DeepSpeed ZeRO-1 optimization.
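
As a quick usage sketch, the adapter can be loaded on top of the base model with transformers and peft. The repository ids and bfloat16 dtype below follow the card; the prompt and generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "CohereLabs/c4ai-command-r-v01"
adapter_id = "ubitech-edg/commandr-35b-cpt"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # matches the bfloat16 training precision
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "Grid-scale battery storage helps balance"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```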


Training Setup

Objective: Language modeling (unsupervised continual pretraining)
Adapter type: LoRA
Precision: bfloat16
Hardware: 8 nodes × 2 NVIDIA A100 64GB GPUs (16 GPUs total)
Framework: DeepSpeed ZeRO-1, Axolotl, PyTorch 2.5.1+cu121
Runtime: ~24 hours
Checkpoints: Saved every 1/5 of an epoch
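
As a rough illustration of the ZeRO-1 setup, the DeepSpeed configuration can be sketched as a Python dict. Axolotl ships a comparable preset (deepspeed_configs/zero1.json); the exact file used here is not stated on the card.

```python
# Sketch of a DeepSpeed ZeRO-1 config matching the setup above (assumed, not
# the exact file used). "auto" values are filled in by the training framework.
deepspeed_config = {
    "zero_optimization": {"stage": 1},  # ZeRO stage 1: shard optimizer states
    "bf16": {"enabled": True},          # bfloat16 training precision
    "gradient_accumulation_steps": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}
```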


Dataset

Public energy domain text sources:

  • arxiv.jsonl - scientific and technical papers
  • gov.jsonl - public governmental documents
  • news.jsonl - news articles
  • wiki.jsonl - Wikipedia text

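A minimal sketch of assembling these sources into one pretraining corpus with the datasets library; the file names follow the list above, and each JSONL record is assumed to carry a "text" field (the usual layout for Axolotl pretraining data), which the card does not confirm.

```python
from datasets import load_dataset, concatenate_datasets

# File names as listed above; each line is assumed to be a JSON object
# with a "text" field (not confirmed by the card).
files = ["arxiv.jsonl", "gov.jsonl", "news.jsonl", "wiki.jsonl"]
parts = [load_dataset("json", data_files=f, split="train") for f in files]
corpus = concatenate_datasets(parts)
print(corpus.num_rows, "documents")
```
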
Hyperparameters

Sequence length: 2048
Micro batch size: 1
Gradient accumulation steps: 4
Epochs: 1
Max steps: 10000
Learning rate: 0.0002
LR scheduler: cosine
Optimizer: AdamW (8-bit)
Warmup steps: 10
Weight decay: 0.0
LoRA rank (r): 16
LoRA alpha: 32
LoRA dropout: 0.05
LoRA target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
Gradient checkpointing: enabled
Flash attention: enabled
Auto resume: enabled
Loss watchdog threshold: 5.0
Loss watchdog patience: 3
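
For reference, these LoRA settings map onto roughly the following peft configuration. This is a sketch of what Axolotl builds internally from its YAML config; the bias and task_type values are assumed defaults, not stated on the card.

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,              # LoRA rank
    lora_alpha=32,     # scaling factor
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    bias="none",              # assumed default, not stated on the card
    task_type="CAUSAL_LM",
)
```

With a micro batch size of 1, gradient accumulation of 4, and 16 data-parallel GPUs under ZeRO-1, the effective global batch works out to roughly 1 × 4 × 16 = 64 sequences of 2048 tokens (about 131k tokens) per optimizer step.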

Tokenizer

Tokenizer type: AutoTokenizer
Special tokens: <|end_of_text|> used as pad_token
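
Loading the tokenizer and registering the pad token can be sketched as follows; whether <|end_of_text|> already exists in the Command-R vocabulary is not stated on the card, so add_special_tokens is used, which only extends the vocabulary if the token is missing.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CohereLabs/c4ai-command-r-v01")

# The card specifies <|end_of_text|> as the padding token.
tokenizer.add_special_tokens({"pad_token": "<|end_of_text|>"})

# If a new token was actually added, the model's embedding matrix would
# need resizing: model.resize_token_embeddings(len(tokenizer))
print(tokenizer.pad_token, tokenizer.pad_token_id)
```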
