NaNs when fine-tuning

#4
by cbudd - opened

I've only run initial tests, but this model is unstable during fine-tuning: the loss is 0.0 on the first epoch and then becomes NaN. The same script runs fine for the 1b variant. Snippet below...

import torch
from transformers import AutoConfig, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
from trl import SFTConfig, SFTTrainer

# Enable dropout in the model config for fine-tuning
model_config = AutoConfig.from_pretrained(args.model)
model_config.attention_dropout = 0.1
model_config.resid_dropout = 0.1

# Use eager attention for Gemma 3 models, SDPA otherwise
model = AutoModelForCausalLM.from_pretrained(
    args.model,
    config=model_config,
    torch_dtype=torch.float16,
    attn_implementation="eager" if "gemma-3" in args.model else "sdpa",
)

# Attach LoRA adapters to all linear layers
peft_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules="all-linear",
    lora_dropout=0.1,
    bias="none",
)
model = get_peft_model(model, peft_config)

config = SFTConfig(
    output_dir=out_dir,
    num_train_epochs=args.epochs,
    per_device_train_batch_size=args.batch_size,
    do_eval=True,
    eval_strategy="steps",
    eval_steps=20,
    save_strategy="steps",
    save_steps=20,
    save_total_limit=1,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    save_on_each_node=False,
    weight_decay=0.05,
    fp16=True,
    load_best_model_at_end=True,
)

trainer = SFTTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    args=config,
)
trainer.train()

Resolved: it was the combination of AMP in the trainer ("fp16=True") and loading the model in half precision ("torch_dtype=torch.float16").
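
For anyone hitting the same thing, a minimal sketch of the combination that worked here (reusing the args/config names from my snippet above, so treat it as illustrative rather than a drop-in script): load the model in full precision and let AMP handle the casting.

import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_config = AutoConfig.from_pretrained(args.model)  # dropout settings as before

# Keep master weights in float32; fp16=True in SFTConfig then applies mixed
# precision during training instead of stacking on a half-precision load.
model = AutoModelForCausalLM.from_pretrained(
    args.model,
    config=model_config,
    torch_dtype=torch.float32,
    attn_implementation="eager" if "gemma-3" in args.model else "sdpa",
)
# LoRA and SFTConfig setup unchanged, keeping fp16=True.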

Google org

thanks for sharing!

Can you tell me how to resolve this?

Google org

Hi,

The issue appears to be a conflict between AMP (fp16=True in the Trainer) and loading the model in half precision (torch_dtype=torch.float16). This can cause instability during training, often leading to NaN loss after the first epoch.

Avoid applying half precision twice:

If you enable AMP via fp16=True, load the model in full precision (torch_dtype=torch.float32), or

If you load the model in half precision (torch_dtype=torch.float16), disable fp16 in the trainer config.
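
For instance, a minimal sketch of the second option, assuming the same args, model_config and out_dir variables as in your snippet (illustrative only):

import torch
from transformers import AutoModelForCausalLM
from trl import SFTConfig

# Half-precision load...
model = AutoModelForCausalLM.from_pretrained(
    args.model,
    config=model_config,
    torch_dtype=torch.float16,
    attn_implementation="eager" if "gemma-3" in args.model else "sdpa",
)

# ...so do not enable AMP on top of it.
config = SFTConfig(
    output_dir=out_dir,
    num_train_epochs=args.epochs,
    per_device_train_batch_size=args.batch_size,
    fp16=False,  # the model is already float16; avoid casting twice
    # remaining arguments as in the original snippet
)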

Kindly try this and let us know if you have any concerns; we will assist you. Thank you.

Hey @lkv, when loading gemma-3-270m-it in fp16, it starts giving NaNs even with very basic code:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

print('Testing base Gemma-3-270m-it in FP16...')

# Load in FP16 (like we were doing before the fix)
model = AutoModelForCausalLM.from_pretrained(
  'google/gemma-3-270m-it',
  dtype=torch.float16,
  device_map='auto'
)

tokenizer = AutoTokenizer.from_pretrained('google/gemma-3-270m-it')

text = '<bos><start_of_turn>user\nHello<end_of_turn>\n<start_of_turn>model\n'
inputs = tokenizer(text, return_tensors='pt').to(model.device)

print('Running forward pass in FP16...')
with torch.no_grad():
  outputs = model(**inputs)
  logits = outputs.logits[0, -1, :]

print(f'Logits contain NaN: {torch.isnan(logits).any().item()}')
print(f'Logits contain Inf: {torch.isinf(logits).any().item()}')

if not torch.isnan(logits).any():
  print('Logits min/max:', logits.min().item(), logits.max().item())

  # Try generation
  print('\nTrying generation in FP16...')
  gen_out = model.generate(**inputs, max_new_tokens=10, pad_token_id=tokenizer.eos_token_id)
  print('Generated:', tokenizer.decode(gen_out[0][inputs['input_ids'].shape[1]:]))
  print('\n✅ Base model works fine in FP16!')
else:
  print('\n❌ Base model also has NaN in FP16!')

This fails with ❌ Base model also has NaN in FP16!
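
For comparison, here is a minimal sketch (my own check, not from the thread) that repeats the same forward pass with a float32 load; it should report no NaNs if the instability really comes from the half-precision weights rather than the checkpoint itself:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Same checkpoint, loaded in full precision for comparison
model = AutoModelForCausalLM.from_pretrained(
  'google/gemma-3-270m-it',
  dtype=torch.float32,
  device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained('google/gemma-3-270m-it')

text = '<bos><start_of_turn>user\nHello<end_of_turn>\n<start_of_turn>model\n'
inputs = tokenizer(text, return_tensors='pt').to(model.device)

with torch.no_grad():
  logits = model(**inputs).logits[0, -1, :]

# Expect False here if the NaNs are specific to the FP16 load
print(f'FP32 logits contain NaN: {torch.isnan(logits).any().item()}')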
