axolotl version: `0.13.0.dev0`

```yaml
base_model: Qwen/Qwen3-4B-Thinking-2507
datasets:
  - path: ICEPVP8977/Uncensored_Small_Reasoning
    type: alpaca
output_dir: ./outputs/qwen-4b-thinking-lora-uncensored
sequence_len: 4096
adapter: lora
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj
  - k_proj
  - o_proj
  - gate_proj
  - down_proj
  - up_proj
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: adamw_bnb_8bit
learning_rate: 0.0002
load_in_4bit: true
train_on_inputs: false
bf16: auto
```
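With axolotl installed, training is typically launched by pointing the CLI at this file (e.g. `axolotl train config.yml`). For readers more familiar with PEFT, the adapter settings above map roughly to the configuration below. This is a sketch of the equivalent setup using the transformers/peft/bitsandbytes APIs, not the code axolotl runs internally:

```python
# Rough PEFT equivalent of the LoRA settings in the config above (sketch only).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# load_in_4bit: true  -> 4-bit quantization via bitsandbytes
bnb_config = BitsAndBytesConfig(load_in_4bit=True)

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Thinking-2507",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,  # bf16: auto (assumes an Ampere-or-newer GPU)
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)  # standard prep for QLoRA-style training

lora_config = LoraConfig(
    r=8,                 # lora_r
    lora_alpha=16,       # lora_alpha
    lora_dropout=0.05,   # lora_dropout
    target_modules=[     # lora_target_modules
        "q_proj", "v_proj", "k_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```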
This is a LoRA adapter for Qwen/Qwen3-4B-Thinking-2507, fine-tuned on the ICEPVP8977/Uncensored_Small_Reasoning dataset. It fully uncensors the Qwen3 4B Thinking model; use the Alpaca instruction template when prompting.
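A minimal inference sketch is shown below, assuming the transformers and peft libraries. The adapter path reuses the local `output_dir` from the config; substitute this repository's Hub id if you are loading the published adapter instead. The prompt string follows the standard no-input Alpaca template implied by the dataset's `type: alpaca` setting.

```python
# Load the base model, apply the LoRA adapter, and generate from an
# Alpaca-formatted prompt (sketch; adjust paths/ids to your setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-4B-Thinking-2507"
adapter_path = "./outputs/qwen-4b-thinking-lora-uncensored"  # or the adapter's Hub repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_path)

# Alpaca instruction template (no-input variant); the instruction is illustrative.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA adapter is.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```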