# 📦 Hawky AI H1 Mini 1B (Experimental)

## 📋 Model Description
Hawky AI H1 Mini 1B Experimental is a fine-tuned version of Google's Gemma 3 1B Instruct, specialized in Meta Andromeda algorithm expertise and creative-first performance marketing strategy.
This experimental model was created to measure the effectiveness of domain-specific fine-tuning for performance marketing knowledge, particularly around Meta's latest advertising AI systems.
## 🎯 What This Model Knows
| Domain | Capabilities |
|---|---|
| Meta Andromeda Algorithm | Deep understanding of Meta's AI-powered ad retrieval system, how it matches creatives to users, and optimization strategies |
| Creative Strategy | Best practices for ad creative diversity, video hooks, creative fatigue, and format selection |
| Campaign Structure | Guidance on Advantage+ Shopping, CBO vs ABO, budget allocation, and campaign consolidation |
| Performance Metrics | Interpretation of CPM, CPA, ROAS signals and diagnostic frameworks |
| Audience Strategy | Broad vs narrow targeting under Andromeda, audience signals, and creative-as-targeting |
## 🧠 Chain-of-Thought Reasoning

The model was trained with Chain-of-Thought (CoT) reasoning, meaning it thinks through problems step-by-step before providing answers. Responses include `<think>` blocks showing the reasoning process.
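For downstream display you will often want to show only the final answer. Below is a minimal sketch for separating the two parts, assuming each response contains a single `<think>...</think>` block as described above:

```python
import re

def split_reasoning(response: str):
    """Split a model response into its <think> reasoning and final answer.

    Assumes at most one <think>...</think> block, per the training format.
    """
    match = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    if match is None:
        return None, response.strip()  # no reasoning block emitted
    reasoning = match.group(1).strip()
    answer = response[match.end():].strip()
    return reasoning, answer
```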
## 📊 Training Details

### Base Model
| Property | Value |
|---|---|
| Model | Google Gemma 3 1B Instruct |
| Parameters | ~1 Billion |
| Architecture | Decoder-only Transformer |
### Fine-tuning Configuration
| Property | Value |
|---|---|
| Method | QLoRA (4-bit Quantization + LoRA) |
| LoRA Rank (r) | 16 |
| LoRA Alpha | 32 |
| LoRA Dropout | 0.05 |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Trainable Parameters | ~0.5% of total |
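For reference, the table above corresponds roughly to the following PEFT `LoraConfig`. This is a sketch reconstructed from the listed values, not the exact training script; the `bias` and `task_type` settings are assumptions:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                 # LoRA rank
    lora_alpha=32,        # scaling factor
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",   # attention projections
        "gate_proj", "up_proj", "down_proj",      # MLP projections
    ],
    bias="none",          # assumption: not stated in the table
    task_type="CAUSAL_LM",
)
```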
### Training Hyperparameters
| Property | Value |
|---|---|
| Epochs | 3 |
| Effective Batch Size | 16 |
| Learning Rate | 2e-4 |
| LR Scheduler | Cosine with warmup |
| Warmup Ratio | 0.1 |
| Precision | FP16 |
| Max Sequence Length | 1024 |
| Optimizer | Paged AdamW 8-bit |
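These hyperparameters map approximately onto Hugging Face `TrainingArguments`. The per-device batch size and gradient accumulation split below is one plausible way to reach the effective batch size of 16 on a T4, and it, like the output directory, is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hawky-h1-mini-1b",     # hypothetical path
    num_train_epochs=3,
    per_device_train_batch_size=4,     # assumption: 4 x 4 accumulation = 16 effective
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    optim="paged_adamw_8bit",
)
```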
### Training Data
| Property | Value |
|---|---|
| Dataset | Hawky Andromeda Dataset |
| Samples | 1,000 Q&A pairs |
| Format | Chain-of-Thought with `<think>` reasoning blocks |
| Topics | Meta Andromeda, creative strategy, campaign structure, performance optimization |
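The dataset schema itself is not published; purely as an illustration of the format described above, a record might look like the following (field names are assumptions):

```python
# Hypothetical training record; field names are illustrative only.
sample = {
    "question": "How many distinct creative concepts should I use in my Advantage+ campaign?",
    "answer": (
        "<think>\n"
        "Andromeda treats similar variations as one ad...\n"
        "</think>\n"
        "Aim for 8-15 genuinely distinct creative concepts per campaign..."
    ),
}
```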
### Training Results
| Metric | Value |
|---|---|
| Final Training Loss | 0.4849 |
| Training Hardware | NVIDIA Tesla T4 (Google Colab) |
| Training Duration | ~45-60 minutes |
## 🚀 Quick Start

### Installation

```bash
pip install transformers peft accelerate bitsandbytes torch
```
### Load the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# Quantization config (optional, for memory efficiency)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-1b-it",
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Load fine-tuned LoRA adapter
model = PeftModel.from_pretrained(
    base_model,
    "Ashwath21/hawky-ai-h1-mini-1b-experimental",
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("Ashwath21/hawky-ai-h1-mini-1b-experimental")
```
### Generate Responses

```python
def ask_hawky(question):
    """Ask the Hawky AI model a performance marketing question."""
    prompt = f"<start_of_turn>user\n{question}<end_of_turn>\n<start_of_turn>model\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=512,
            temperature=0.7,
            top_p=0.9,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated tokens; skip_special_tokens would
    # strip the <start_of_turn> markers, so splitting on them would fail.
    new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

# Example
response = ask_hawky("How many creatives should I use in my Advantage+ campaign?")
print(response)
```
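If the uploaded tokenizer includes Gemma's chat template (an assumption, though typical for Gemma-derived repos), you can let transformers build the prompt instead of hard-coding turn markers:

```python
# Alternative prompt construction via the tokenizer's chat template.
messages = [{"role": "user", "content": "What makes an effective video hook?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,   # appends the model-turn marker
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```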
## 💬 Example Conversations

### Creative Diversity

**Q: How many distinct creative concepts should I use in my Advantage+ campaign?**

```
<think>
This is a creative strategy question about Andromeda's diversity requirements.
Key considerations:
- Andromeda treats similar variations as one ad
- Genuine diversity means different hooks, angles, formats
- The algorithm needs variety to optimize effectively
Research indicates 8-15 genuinely distinct concepts is optimal.
</think>

You should aim for 8-15 genuinely distinct creative concepts per campaign.
"Genuinely distinct" means different hooks, angles, formats, or visual styles -
not just minor copy variations. Five product shots with slightly different text
are treated as ONE ad by Andromeda. True diversity gives the algorithm more
tools to match the right creative to each user.
```
### Video Hooks

**Q: What makes an effective video hook under Andromeda?**

```
<think>
This is about the critical first 1-3 seconds of video ads.
Under Andromeda, hooks serve two purposes:
1. Stop the scroll (user engagement)
2. Signal audience relevance (algorithm matching)
Effective hooks share certain characteristics.
</think>

Effective video hooks capture attention in the first 1-3 seconds through:
- Pattern interrupts (unexpected visuals)
- Direct benefits ("This saved me $800")
- Compelling questions
- Social proof ("500,000 customers")
- Before/after reveals

Design for sound-off with text overlays. Test 3-5 different hooks per core
video concept - hooks are the highest-impact creative element and fatigue
fastest, so refresh weekly.
```
### Performance Diagnosis

**Q: My CPM is rising but conversions are stable. What does this indicate?**

```
<think>
Let me analyze this performance pattern.
CPM rising + conversions stable means:
- Cost per impression is increasing
- But conversion volume hasn't dropped
This is actually a positive signal under Andromeda.
</think>

This is actually a healthy signal! Rising CPM with stable conversions indicates
Andromeda is finding higher-value users who cost more to reach but convert at
better rates. Your effective CPA is likely improving.

Monitor your CPA and ROAS - if those metrics are stable or improving, don't
intervene. The algorithm is optimizing for outcomes, not reach efficiency.
```
## ⚠️ Limitations
| Limitation | Description |
|---|---|
| Experimental | This is a research model, not production-ready |
| Domain-Specific | Optimized for Meta advertising; may not perform well on general tasks |
| Knowledge Cutoff | Training data reflects best practices as of January 2026 |
| Model Size | The 1B-parameter scale limits reasoning depth and complexity |
| No Real-Time Data | Cannot access current campaign data or live metrics |
## 🎯 Intended Use Cases

**✅ Recommended Uses:**
- Research on domain-specific fine-tuning effectiveness
- Learning about Meta Andromeda and creative strategy
- Prototyping AI-assisted marketing tools
- Educational demonstrations
- Brainstorming creative strategies
**❌ Not Recommended For:**
- Production deployment without further evaluation
- Financial or business-critical decisions
- Replacing human marketing expertise
- Real-time campaign management
## 🏢 About Hawky.ai

Hawky.ai is an AI-powered creative intelligence platform for digital marketing, providing:

- 📊 **Automated Campaign Analysis** - Deep insights into ad performance
- 🎨 **Creative Performance Optimization** - Data-driven creative recommendations
- 🔍 **Competitive Intelligence** - Track and analyze competitor strategies
- 🤖 **AI-Powered Insights** - Actionable recommendations for performance marketers

**Clients include:** WPP, Madison, GroupM, and leading performance marketing agencies.
## 📚 Citation

```bibtex
@misc{hawky-ai-h1-mini-1b-2026,
  author       = {Hawky.ai},
  title        = {Hawky AI H1 Mini 1B Experimental: A Fine-tuned Model for Performance Marketing},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Ashwath21/hawky-ai-h1-mini-1b-experimental}}
}
```
## 📄 License
This model is released under the Gemma License.
The fine-tuning code and dataset methodology are released under Apache 2.0.
## 🙏 Acknowledgments
- Google for the Gemma 3 1B base model
- Hugging Face for the transformers and PEFT libraries
- Meta for publicly sharing Andromeda algorithm insights
Built with ❤️ by Hawky.ai