SonarSweep Java gpt-oss-20b

Model Details

Model Description

This is a fine-tuned version of openai/gpt-oss-20b, optimized for high-quality Java code generation. SonarSweep was used to curate and remediate the Java training dataset. Fine-tuning on this dataset has produced a model that generates expert-level Java patterns while avoiding the bugs and vulnerabilities observed when benchmarking the base model.

  • Developed by: Sonar (SonarSweep Team)
  • Model type: Mixture of Experts
  • Languages: Primarily Java, English
  • License: Apache 2.0

Uses

This model is designed primarily as a demonstration of our SonarSweep pipeline, specifically for fine-tuning on Java.

By using SonarSweep for targeted data preprocessing, we maintain pass@1 scores across all coding benchmarks while significantly reducing the number of bugs and vulnerabilities compared to the base model.

Although this is a demonstration model, we have tested it to ensure its responses are helpful, natural, and adhere to instructions. We have evaluated the model on a range of benchmarks across software engineering and general use cases to ensure it remains widely useful.

We focus on gpt-oss-20b in the "low" reasoning setting, which we refer to as gpt-oss-20b-low. In this setting, the base model responds quickly and achieves relatively high scores in code generation benchmarks. This configuration would be appropriate, for example, for developers using an LLM for in-line completions.

As with the base model, the fine-tuned model was trained on data in OpenAI's harmony response format. The model should only be used with the harmony format, as it will not work correctly otherwise.

Technical recommendations for usage are included below; see Getting Started.

Reasoning Capabilities

This model operates exclusively as a low-reasoning model, derived from gpt-oss-20b-low. It is optimized for speed and standard conversational tasks rather than complex chain-of-thought processing.

Please note that specifying or adjusting reasoning effort is not supported. Any parameters attempting to enforce "medium" or "high" reasoning settings will be ignored or may result in an error. The model is hard-coded to a low-reasoning profile.

Bias, Risks, and Limitations

Despite being trained to output high-quality Java code and achieving substantial improvements on our code quality benchmarks, the model is still liable to generate bugs and security vulnerabilities. Users must never treat generated code as production-ready without a thorough review, including static analysis (for example, with SonarQube).

Our model's (and the base model's) knowledge is static, based on its training data cutoff. We cannot guarantee adherence to the latest Java standards, best practices in newly released libraries, or correct use of private or proprietary APIs.

Getting Started

You can use the SonarSweep-java-gpt-oss-20b model with Transformers. If you use the Transformers chat template, it will automatically apply the harmony response format. If you use model.generate directly, you need to apply the harmony format manually using the chat template; a minimal sketch is shown at the end of this section.

Minimum Requirements

  • GPU Memory: 48GB+ VRAM required for the model loaded in bf16 precision
  • Storage: 100GB
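
As a rough sanity check on the VRAM figure, the weights alone occupy roughly 42GB in bf16 (a back-of-the-envelope calculation, ignoring activation and KV-cache overhead):

params = 20.9e9        # total parameters (see Model Architecture below)
bytes_per_param = 2    # bf16 stores each parameter in 2 bytes
print(f"{params * bytes_per_param / 1e9:.1f} GB")  # ~41.8 GB of weights, hence the 48GB+ recommendation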

To get started, install the necessary dependencies to set up your environment:

pip install -U transformers kernels torch

Once installed, you can run the model using the following snippet:

from transformers import pipeline

model_id = "SonarSource/SonarSweep-java-gpt-oss-20b"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a function in Java that creates a two-dimensional array with 5 rows and 2 columns, each element of which is a random number between 1 and 50."},
]

outputs = pipe(
    messages,
    max_new_tokens=2048,
)
print(outputs[0]["generated_text"][-1])

For more details see here.
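
If you prefer to call model.generate directly, the snippet below is a minimal sketch of applying the chat template (and therefore the harmony format) by hand. It assumes the standard AutoTokenizer and AutoModelForCausalLM APIs; the prompt is illustrative.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SonarSource/SonarSweep-java-gpt-oss-20b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a Java method that reverses a string."},
]

# The chat template renders the conversation in the harmony response format
# expected by gpt-oss models before tokenization.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))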

Training Details

Training Data

We compiled open-source code data from OpenCoder Datasets and synthetic alignment data generated using openai/gpt-oss-120b to create a Java dataset of 70k examples. We then used SonarSweep to improve the quality of the dataset.

Training Hyperparameters

We trained LoRA adapters across all linear layers of the experts and attention blocks.

| Parameter | Value |
|---|---|
| Batch Size | 64 |
| Training Epochs | 2 |
| Learning Rate | 1e-4 |
| LR Scheduler | Cosine with 10% Warmup |
| LoRA Rank | 64 |
| LoRA Alpha | 128 |
| Attention Mechanism | SDPA |
| Precision | bf16 mixed precision |
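
For illustration, a roughly equivalent adapter setup with the Hugging Face peft library might look like the sketch below. This is a reconstruction from the table above, not the exact training configuration; in particular, target_modules="all-linear" is a stand-in for "all linear layers of the experts and attention blocks".

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical sketch of a LoRA setup matching the hyperparameters above.
base_model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    torch_dtype="auto",
    device_map="auto",
)

lora_config = LoraConfig(
    r=64,                         # LoRA rank
    lora_alpha=128,               # LoRA alpha
    target_modules="all-linear",  # stand-in for the expert and attention linear layers
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # reports the trainable parameter count (see Model Architecture below)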

Model Architecture

| Property | Value |
|---|---|
| Architecture | gpt-oss (Transformer-based Mixture of Experts) |
| Parameters | 20.9 billion (3.6 billion active) |
| Trainable Parameters | 740M (3.4% of total) |

Evaluation

Code Quality

We used SonarQube to evaluate the quality, verbosity, and complexity of Java code generated for the ComplexCodeEval and MultiPL-E Java benchmarks.

The fine-tuned and base models achieve a similar pass@1 metric for code generation (within 1% difference). Results for other languages are shown in the next subsection.

The fine-tuned model achieves this metric while generating fewer lines of code.

For code quality, we see a dramatic reduction in both the number and density of Sonar issues, split among bugs, security vulnerabilities, and code smells (see the Glossary for definitions).

| Metric | Base Model | Fine-tuned Model |
|---|---|---|
| MultiPL-E Pass@1 | 71.49 | 72.37 |
| Lines of Code Generated | 247,895 | 233,031 |
| Bugs Generated | 222 | 123 |
| Bugs per KLOC | 0.9 | 0.53 |
| Security Vulnerabilities | 102 | 56 |
| Vulnerabilities per KLOC | 0.41 | 0.24 |
| Code Smells | 4,968 | 3,796 |
| Code Smells per KLOC | 20.04 | 16.29 |

After fine-tuning, both cyclomatic and cognitive complexity decreased, in total and per thousand lines of code.

| Complexity Metric | Base Model | Fine-tuned Model |
|---|---|---|
| Cyclomatic (Total) | 52,139 | 45,006 |
| Cyclomatic (per KLOC) | 210.33 | 193.13 |
| Cognitive (Total) | 30,871 | 24,419 |
| Cognitive (per KLOC) | 124.53 | 104.79 |

Note: KLOC = Thousand Lines of Code.
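
The per-KLOC densities in the tables above follow directly from the totals; a quick sketch of the calculation:

def per_kloc(total_issues: int, lines_of_code: int) -> float:
    """Issue density per thousand lines of code (KLOC)."""
    return total_issues / (lines_of_code / 1000)

# Example: bug density from the code quality table above.
print(round(per_kloc(222, 247_895), 2))  # base model  -> 0.9
print(round(per_kloc(123, 233_031), 2))  # fine-tuned  -> 0.53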

Code Generation

MultiPL-E is a multi-language parallel benchmark for evaluating the performance of LLMs on natural-language-to-code generation tasks. For each language, it provides translated versions of HumanEval and MBPP.

We fine-tuned on Java but evaluated on a selection of the available languages. For all languages, scores are averaged over 10 samples with the temperature set to 0.01.

Results

The changes are not significant for any language, which demonstrates that fine-tuning does not degrade the model's ability to generate functional code.

| Language | Dataset | Num Examples | Base Model Pass@1 | Fine-tuned Model Pass@1 |
|---|---|---|---|---|
| Java | HumanEval | 158 | 85.40% | 84.50% |
| Java | MBPP | 386 | 65.80% | 67.40% |
| Python | HumanEval | 164 | 43.06% | 47.39% |
| Python | MBPP* | 257 | 26.50% | 29.89% |
| PHP | MBPP | 397 | 63.20% | 65.80% |
| TypeScript | MBPP | 390 | 74.10% | 73.00% |
| Go | MBPP | 374 | 35.00% | 36.90% |

* This is the sanitized MBPP benchmark from the original Google Research paper.

General Ability: MMLU

The MMLU (Massive Multitask Language Understanding) benchmark evaluates the model's general knowledge with 14,042 multiple-choice questions across a wide range of subjects.

| Metric | Base Model | Fine-tuned Model |
|---|---|---|
| Correct Answers | 11,081 | 10,969 |
| Accuracy | 78.91% | 78.12% |

The fine-tuned model maintains comparable performance on MMLU, with only a 0.79 percentage point decrease in accuracy, demonstrating that specialization in Java code quality does not significantly impact general knowledge capabilities.

Glossary

SonarSweep: A pipeline that analyzes and remediates code for training datasets. For more details, see the announcement on sonarsource.com.

Lines of Code (LOC): The number of lines of code generated, excluding comments and blank lines. At the scale of our evaluation, KLOC (thousand lines of code) is the more convenient unit.

Code Quality: Using static analysis, SonarQube specifically monitors and measures three core software qualities, each of which has an associated type of issue:

  • Security: The protection of your software from unauthorized access, use, or destruction. Detected security issues are called Vulnerabilities.
  • Reliability: A measure of how well your software maintains its level of performance under stated conditions for a stated period of time. Detected reliability issues are called Bugs.
  • Maintainability: Refers to the ease with which you can repair, improve, and understand software code. Detected maintainability issues are called Code Smells.

SonarQube analysis for Java supports the detection of a wide range of quality issues. For details, see rules.sonarsource.com.

Acknowledgements

Model Card Authors

SonarSweep Team

For feedback: https://community.sonarsource.com/
