Qwen-3B-SAST-Python-Remediation-GGUF
This is a fine-tuned version of Qwen/Qwen2.5-3B-Instruct, specialized for suggesting fixes to Python security vulnerabilities found by Static Application Security Testing (SAST) tools.
It takes a vulnerability description and a snippet of Python code as input and suggests a secure fix. The model was fine-tuned on a custom, high-quality dataset of Python SAST issues.
How to Use It
This model is in GGUF format and is designed to be used with llama.cpp.
Command-Line Inference with llama-cli
- Download the desired GGUF file from the "Files and versions" tab. The `q4_k_m` version is recommended for the best balance of quality and performance.
- Run the model using `llama-cli` with the `--chatml` prompt format. This is critical for good performance.
```sh
# Run the command-line interface with your model
.\build\bin\Release\llama-cli.exe -m model\path\qwen-sast-q4_k_m.gguf --chatml -n 256 -p "Fix the security vulnerability in this code ### Input: def bad_code(): ..."
```
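If you drive the model from a script rather than `llama-cli`, the prompt needs the same shape. A minimal sketch is below; the helper names are illustrative, and the ChatML wrapper reproduces what the `--chatml` flag applies for you (only runtimes that do not apply a chat template need it):

```python
# Template that --chatml applies around the user prompt; shown here for
# runtimes that do not add the chat template themselves.
CHATML_TEMPLATE = "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"


def build_prompt(code: str) -> str:
    # Instruction format matching the llama-cli example above:
    # task description, then the vulnerable snippet after "### Input:".
    return f"Fix the security vulnerability in this code ### Input: {code}"


def wrap_chatml(prompt: str) -> str:
    # Wrap a plain prompt in ChatML, leaving the assistant turn open
    # so the model generates the fix.
    return CHATML_TEMPLATE.format(prompt=prompt)


full_prompt = wrap_chatml(build_prompt("def bad_code(): ..."))
```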
Model Details
- Base Model: Qwen/Qwen2.5-3B-Instruct
- Fine-tuning: The model was fine-tuned using QLoRA for 3 epochs.
- Dataset: The model was trained on a private, high-quality dataset of Python code vulnerabilities and their corresponding remediations.
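For readers who want to reproduce a similar setup, a QLoRA fine-tune of this base model can be sketched with `peft` and `bitsandbytes` as below. Note the card only states "QLoRA for 3 epochs"; every hyperparameter here (rank, alpha, dropout, target modules) is an assumption, not the configuration actually used:

```python
# Illustrative QLoRA setup; hyperparameter values are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # QLoRA keeps base weights in 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-3B-Instruct",
    quantization_config=bnb_config,
)
lora_config = LoraConfig(
    r=16,                                    # assumed LoRA rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)   # only adapter weights are trained
```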
Intended Use
This model is intended for security researchers, developers, and DevOps engineers to accelerate the process of fixing common Python security vulnerabilities. It can be used as an assistive tool to suggest fixes that can then be reviewed by a human.
Limitations and Bias
- Language: The model is specialized for Python only and will not perform well on other programming languages.
- Accuracy: While the model produces high-quality fixes, it can still make mistakes or "hallucinate." All suggested code remediations must be carefully reviewed by a human expert before being implemented in production.
- Scope: The model was trained on a specific set of vulnerability types. It may not be effective for highly complex or esoteric security issues.
Local Performance Versus the Base Model
With local CPU inference, the fine-tuned model is roughly 12% faster per fix than the base model (20.31s vs. 22.95s per vulnerability). The bigger difference, however, is fix quality.
```text
# base qwen:
======================================================================
REMEDIATION SUMMARY
======================================================================
Total vulnerabilities: 35
Successfully remediated: 35
Failed: 0
By confidence:
High: 35
Processing time: 803.30s
Average: 22.95s per vulnerability
Results saved to: fixes.json
```

```text
# fine tuned:
======================================================================
REMEDIATION SUMMARY
======================================================================
Total vulnerabilities: 35
Successfully remediated: 35
Failed: 0
By confidence:
High: 35
Processing time: 710.80s
Average: 20.31s per vulnerability
Results saved to: fixes.json
```
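The headline speedup follows directly from the two summaries above:

```python
# Compute the speedup from the numbers in the two remediation summaries.
base_total, tuned_total = 803.30, 710.80   # total seconds for 35 fixes
base_avg, tuned_avg = 22.95, 20.31         # seconds per vulnerability

speedup_total = (base_total - tuned_total) / base_total * 100
speedup_avg = (base_avg - tuned_avg) / base_avg * 100

print(f"{speedup_total:.1f}% total, {speedup_avg:.1f}% per fix")  # ~11.5% each, i.e. roughly 12%
```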