Gemma 3 ATSPM Finetune (GGUF)

Model Description

This model is a finetuned version of the google/gemma-3-27b-it model, specialized for the evaluation and interpretation of ATSPM (Automated Traffic Signal Performance Measures) charts.

The model is designed to analyze and summarize the data presented in these charts, identifying key performance issues, trends, and potential solutions for optimizing traffic flow.

It can be integrated into larger agentic AI systems that pass it ATSPM charts for automated analysis, or used as a standalone chatbot where a user can directly upload charts and work with the model one-on-one.
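As an illustration of the agentic use case, a pipeline could send a chart to the model through an OpenAI-compatible local server (such as the ones llama.cpp and Ollama expose). The sketch below only constructs the request body rather than sending it; the model alias, endpoint, and message shape are assumptions for illustration, not part of this repository.

```python
import base64
import json

def build_chart_request(chart_png: bytes, question: str) -> str:
    """Build an OpenAI-style chat payload that an agent could POST to a
    local inference server (e.g. http://localhost:8080/v1/chat/completions).
    The model alias below is hypothetical."""
    image_b64 = base64.b64encode(chart_png).decode("ascii")
    payload = {
        "model": "gemma-3-27b-atspm",  # hypothetical local model alias
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            }
        ],
        "temperature": 0.2,
    }
    return json.dumps(payload)

# Example: a stand-in byte string, just to show the shape of the request.
request_body = build_chart_request(b"\x89PNG", "Summarize any coordination issues.")
```

In a real deployment, the agent would POST this body to the server and feed the model's reply back into its reasoning loop.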

Training and Evaluation

This model was finetuned on a custom dataset specifically curated for evaluating ATSPM charts, which consists of 25,851 QA pairs and contains nearly 8.5 million tokens. The training was performed using the QLoRA method, a parameter-efficient fine-tuning technique that quantizes the base model to 4-bit and then trains only a small set of new, low-rank matrices. For this finetune, approximately 6% of the model's parameters were trained for 2 epochs, making the process highly resource-efficient while maintaining the model's performance.
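As a back-of-the-envelope illustration of why LoRA-style finetuning trains so few weights: for a frozen linear layer, the adapter adds only two low-rank matrices. The layer dimensions and rank below are hypothetical, chosen only to show how a roughly 6% trainable fraction can arise; they are not the actual training configuration.

```python
def lora_trainable_fraction(d_in: int, d_out: int, rank: int) -> float:
    """Fraction of a single linear layer's weights that LoRA trains.

    The frozen layer holds d_in * d_out weights; LoRA adds two trainable
    low-rank matrices, A (d_in x rank) and B (rank x d_out).
    """
    full = d_in * d_out
    lora = rank * (d_in + d_out)
    return lora / full

# Hypothetical square projection layer with a rank-160 adapter:
# trains roughly 6% as many weights as full finetuning would.
frac = lora_trainable_fraction(5376, 5376, 160)
```

Because the adapter size grows linearly in the rank while the frozen layer grows quadratically in its width, the trainable fraction stays small even for large models.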

The performance of the finetuned model was benchmarked against the base model and a commercial SOTA model on our custom evaluation dataset. The results demonstrate the significant improvement gained from the specialized finetuning.

| Model | Benchmark Score (Custom ATSPM Evaluation) |
|---|---|
| Gemma 3 27B ATSPM Finetune | 43.97% |
| Gemini 2.5 Pro | 43.94% |
| Gemma 3 27B Instruction-Tuned | 24.76% |

The finetuned model not only significantly outperforms the base Gemma 3 Instruction-Tuned model but also narrowly edges out Gemini 2.5 Pro (by 0.03 percentage points) on this specific, domain-expert task.

How to Use This GGUF File

This repository contains a GGUF quantized version of the finetuned model, optimized for local inference on a variety of hardware. GGUF files are a common format for running large language models with tools like llama.cpp, Ollama, and LM Studio.

With llama.cpp

You can use the GGUF file with llama.cpp to run the model from the command line.

# Clone the llama.cpp repository
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Build llama.cpp (if you haven't already)
cmake -B build
cmake --build build --config Release

# Download the GGUF file from this repository
# Assuming you've downloaded 'gemma_3_27b_atspm_Q8_0.gguf'

# Run the model with a prompt
./build/bin/llama-cli -m gemma_3_27b_atspm_Q8_0.gguf -p "Given the following ATSPM charts, analyze the signal performance and identify any issues with coordination or excessive vehicle delay." -n 128

Citation

If you use this model in your work, please cite it as follows:

@misc{rhone2025atspm-finetune,
  author = {G. Rhone},
  title = {Gemma 3 ATSPM Finetune},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/grhone/gemma-3-27b-atspm-gguf}
}