DISARM Election Watch - Fine-tuned Llama-3.1 Model
Model Description
This is a fine-tuned version of Llama-3.1 optimized for DISARM Framework analysis of election-related content. The model was trained on 6,019 examples of Nigerian election content from multiple platforms to identify and classify disinformation, misinformation, and coordinated influence operations.
Model Details
- Base Model: ArapCheruiyot/disarm_ew-llama3
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Optimization: Apple Silicon (M1 Max)
- Training Data: 6,019 examples from multiple sources
- Task: DISARM Framework classification and narrative analysis
- Language: English
- License: MIT
 
Training Configuration
- LoRA Rank: 16
- Batch Size: 1
- Learning Rate: 3e-4
- Sequence Length: 2048
- Training Iterations: 600
- Final Training Loss: 1.064
- Final Validation Loss: 1.354
- Framework: MLX-LM
- Hardware: Apple M1 Max (64GB RAM)
 
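For reference, a run with this configuration can be launched through the mlx_lm.lora entry point along the following lines. This is a sketch, not the exact command used for this model: flag names vary across mlx-lm releases, and the data path is an assumed layout (a directory containing train.jsonl and valid.jsonl).

```bash
# Sketch of a LoRA fine-tuning run matching the configuration above.
# ./data is an assumed directory holding train.jsonl / valid.jsonl.
python -m mlx_lm.lora \
  --model ArapCheruiyot/disarm_ew-llama3 \
  --train \
  --data ./data \
  --batch-size 1 \
  --iters 600 \
  --learning-rate 3e-4 \
  --max-seq-length 2048
```

In recent mlx-lm releases the LoRA rank (16 here) is set through a YAML config file passed with `--config` rather than a dedicated flag; `python -m mlx_lm.lora --help` lists the options your version supports.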
Quick Start
Using with MLX-LM
```python
from mlx_lm import load, generate

# Load the complete fine-tuned model
model, tokenizer = load("models/disarm_ew_llama3_finetuned")

# Example prompt
prompt = """### Instruction:
Classify the following content according to DISARM Framework techniques and meta-narratives:
### Input:
A viral WhatsApp broadcast claims that the BVAS machines have been pre-loaded with votes by INEC in favour of the incumbent party.
### Response:"""

# Generate response
response = generate(model, tokenizer, prompt, max_tokens=256, temp=0.1)
print(response)
```
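Note that the `temp` keyword reflects the mlx-lm API at the time of training; recent releases configure temperature through a sampler object instead. For quick one-off checks, mlx-lm's bundled command-line generator can be used rather than the Python API (a sketch; flag names may differ by version):

```bash
# One-off generation via mlx-lm's CLI instead of the Python API.
python -m mlx_lm.generate \
  --model models/disarm_ew_llama3_finetuned \
  --max-tokens 256 \
  --prompt "### Instruction:
Classify the following content according to DISARM Framework techniques and meta-narratives:
### Input:
A viral WhatsApp broadcast claims that the BVAS machines have been pre-loaded with votes by INEC in favour of the incumbent party.
### Response:"
```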
Using with Ollama
```bash
# Create the Ollama model from the Modelfile
ollama create disarm-ew-llama3-finetuned -f Modelfile

# Run the model
ollama run disarm-ew-llama3-finetuned "Your prompt here"
```
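The Modelfile itself is not shown in this card. A minimal sketch might look like the following, assuming the fused weights have first been converted to a GGUF file (the file name and parameter value are illustrative):

```
# Minimal Modelfile sketch; adjust FROM to point at your converted weights.
FROM ./disarm-ew-llama3-finetuned.gguf

# Keep the temperature low for more deterministic classifications.
PARAMETER temperature 0.1
```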
Example Usage
```bash
ollama run disarm-ew-llama3-finetuned "### Instruction:
Classify the following content according to DISARM Framework techniques and meta-narratives:
### Input:
A viral WhatsApp broadcast claims that the BVAS machines have been pre-loaded with votes by INEC in favour of the incumbent party.
### Response:"
```
Expected Output
```json
{
  "meta_narrative": "Compromised Election Technology",
  "primary_disarm_technique": "T0022.001: Develop False Conspiracy Theory Narratives about Electoral Manipulation and Compromise",
  "confidence_score": 0.98,
  "key_indicators": ["BVAS", "pre-loaded", "INEC"],
  "platform": "WhatsApp",
  "language": "en",
  "category": "Undermining Electoral Institutions"
}
```
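The model emits free-form text, so the JSON above is a best-effort format rather than a guaranteed schema; downstream code should parse defensively. A minimal sketch in Python (the field names follow the example output above):

```python
import json

def parse_classification(raw: str) -> dict | None:
    """Best-effort parse of the model's JSON-formatted classification."""
    # The model may wrap the JSON in extra text; isolate the outermost braces.
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        return None
    try:
        result = json.loads(raw[start : end + 1])
    except json.JSONDecodeError:
        return None
    # Sanity-check one of the fields shown in the expected output above.
    return result if "primary_disarm_technique" in result else None

parsed = parse_classification(response)  # `response` from the MLX-LM example
if parsed:
    print(parsed["meta_narrative"], parsed["confidence_score"])
```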
Performance
Training Performance
- Training Loss: 1.064
- Validation Loss: 1.354
- Training Speed: ~1.16 iterations/second
- Memory Usage: 19.161 GB peak during training
 
Inference Performance
- Inference Speed: ~20 tokens/second
- Memory Usage: 16.149 GB during inference
- Model Size: 16GB (fused), 1.7MB (LoRA adapters)
 
Hardware Optimization
- Apple Silicon: Optimized for the M1 Max
- Metal GPU: Accelerated inference
- Memory Management: 16GB wired-memory optimization
 
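The wired-memory setting refers to macOS's limit on memory that Metal can keep resident on the GPU. It can be raised with a sysctl; a sketch, assuming the 16GB figure above (the value is in megabytes, requires admin rights, and resets on reboot):

```bash
# Allow Metal to wire up to ~16 GB of unified memory (illustrative value).
sudo sysctl iogpu.wired_limit_mb=16384
```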
Model Files
Fused Model (Complete)
- Size: 16GB
- Format: MLX-LM safetensors
- Files: 4 model weight files + configuration
 
LoRA Adapters (Lightweight)
- Size: 1.7MB
- Format: safetensors
- Files: Final adapters + training checkpoints
 
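The fused model is produced by merging the LoRA adapters back into the base weights. With mlx-lm this is done via the mlx_lm.fuse entry point; the sketch below assumes the adapter and output paths, and flag names may differ slightly across versions:

```bash
# Merge the LoRA adapters into the base model to produce the fused weights.
# Paths are illustrative.
python -m mlx_lm.fuse \
  --model ArapCheruiyot/disarm_ew-llama3 \
  --adapter-path adapters \
  --save-path models/disarm_ew_llama3_finetuned
```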
Local Deployment Benefits
- Privacy: Run locally without sending data to external servers
- Speed: Fast inference on local hardware
- Customization: Modify prompts and parameters as needed
- Offline: Works without an internet connection
 
Contact
For questions, issues, or collaboration opportunities:
- Model Repository: ArapCheruiyot/disarm-ew-llama3-finetuned
- Dataset Repository: ArapCheruiyot/disarm-election-watch-dataset
 