decompiler-v2

This repository contains merged fine‑tuned weights for the base model Qwen/Qwen3-4B-Thinking-2507.

  • Task: idiomatic decompilation (assembly → high-level code)
  • Training: LoRA/DoRA adapters trained with TRL SFT on custom assembly→Dart/Swift pairs
  • How to load (merged):
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "raafatabualazm/decompiler-v2"
tok = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
# Weights are stored in BF16. trust_remote_code is not required for Qwen3 on
# recent transformers versions, but it is harmless to pass.
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="bfloat16", trust_remote_code=True)
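Once the model and tokenizer are loaded, prompting follows the standard transformers chat-template flow. The `build_messages` helper and its prompt wording below are assumptions for illustration; the repository does not document the exact prompt format used during fine-tuning:

```python
# Hypothetical prompt builder: the exact wording used during fine-tuning is
# not documented, so treat this prompt as an assumption.
def build_messages(asm: str, target: str = "Swift") -> list:
    """Wrap raw assembly in a single-turn chat request for decompilation."""
    prompt = f"Decompile the following assembly into idiomatic {target}:\n\n{asm}"
    return [{"role": "user", "content": prompt}]

messages = build_messages("push rbp\nmov rbp, rsp\nmov eax, 0\npop rbp\nret")

# With `tok` and `model` from the snippet above:
# text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# inputs = tok(text, return_tensors="pt").to(model.device)
# out = model.generate(**inputs, max_new_tokens=1024)
# print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```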

Replace the repo id with your own if you fork or rename this repository.
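Because the base model is a Qwen3 "Thinking" variant, generations typically open with a reasoning block delimited by <think>/</think> tags before the final answer. A small post-processing sketch (assuming Qwen's standard thinking-tag format, including outputs where only the closing tag appears) can recover just the decompiled code:

```python
import re

def strip_thinking(text: str) -> str:
    """Remove the model's <think>...</think> reasoning block, keeping the answer."""
    # Drop a complete <think>...</think> block if present.
    text = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    # Some chat templates omit the opening tag; drop everything up to </think>.
    if "</think>" in text:
        text = text.split("</think>", 1)[1]
    return text.strip()

print(strip_thinking("<think>trace the registers</think>\nfunc main() {}"))
# → func main() {}
```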

  • Format: safetensors
  • Model size: 4B params
  • Tensor type: BF16

Model tree for raafatabualazm/decompiler-v2

  • Adapters of this model: 9