GPT-OSS-20B – Differential Diagnosis Radiology Reasoning
This repository provides a LoRA adapter fine-tuned on radiology cases from the Eurorad dataset to enhance differential diagnosis and structured medical reasoning. The adapter attaches to the base model openai/gpt-oss-20b, enabling stronger radiology-focused performance while remaining lightweight and deployable on a single GPU.
Highlights
- Improved differential diagnosis accuracy on Eurorad cases (exact-match accuracy: 78.6% → 86.2%)
- Trained with structured chain-of-thought derived from gpt-oss-120b
- Works with Unsloth, PEFT, and Transformers
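The quick start below assumes the standard Hugging Face stack. A typical environment (an assumption, not a pinned requirements list) can be set up with:

```bash
pip install transformers peft accelerate
```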
Quick Start
🔹 Load with PEFT + Transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

BASE = "openai/gpt-oss-20b"
ADAPTER = "alhusains/gpt-oss-20b-eurorad-lora"

# Load the base model, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER)
model.eval()

prompt = "Provide a differential diagnosis for multiple bilateral lung nodules."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
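Since gpt-oss models are chat-tuned, wrapping the prompt in the tokenizer's built-in chat template will usually produce better-structured answers than raw text. A minimal sketch (the prompt is illustrative):

```python
# gpt-oss ships with a chat template; apply_chat_template frames the
# prompt the way the model was trained to see it.
messages = [
    {"role": "user",
     "content": "Provide a differential diagnosis for multiple bilateral lung nodules."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=300)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For deployment, the adapter can also be merged into the base weights with PEFT's `model.merge_and_unload()`, which removes the adapter indirection at inference time.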
Training Summary
- Dataset: Eurorad radiology case reports (clinical history + imaging findings)
- Supervision: Structured chain-of-thought reasoning generated by gpt-oss-120b
- Objective: Enhance differential diagnosis and structured medical reasoning
- Method: LoRA fine-tuning (see the configuration sketch after this list)
  - Rank: 32
  - Alpha: 64
  - Applied to attention, MLP layers, and MoE experts
- Sequence length: 4096 tokens
- Framework: Unsloth + PEFT (4-bit training)
- Precision: bfloat16 mixed precision
- Training schedule: 3 epochs, AdamW, LR = 1e-4 with cosine decay and warmup
- Result: Improved exact-match diagnostic accuracy on Eurorad cases (base 78.6% → fine-tuned 86.2%)
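The reported hyperparameters can be expressed as a PEFT/Transformers configuration. This is a reconstruction, not the original training script: the `target_modules` names for gpt-oss-20b's attention, MLP, and expert layers are assumptions, and batch size, warmup length, and dropout were not reported.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings as reported above; module names are assumed, not confirmed.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",   # attention (assumed names)
        "gate_proj", "up_proj", "down_proj",      # MLP / MoE experts (assumed names)
    ],
    lora_dropout=0.0,          # not reported; placeholder
    bias="none",
    task_type="CAUSAL_LM",
)

# Schedule as reported: 3 epochs, AdamW, LR 1e-4, cosine decay with warmup.
training_args = TrainingArguments(
    output_dir="outputs",
    num_train_epochs=3,
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,                # warmup length not reported; placeholder
    optim="adamw_torch",
    bf16=True,                        # bfloat16 mixed precision
    per_device_train_batch_size=1,    # not reported; placeholder
    gradient_accumulation_steps=8,    # not reported; placeholder
)
```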
Model tree for alhusains/gpt-oss-20b-ddx
- Base model: openai/gpt-oss-20b

Evaluation results
- Exact-match accuracy on Eurorad radiology cases (self-reported): 0.862