# Deploying AI API Service to Hugging Face Spaces with Ollama
This guide shows you how to deploy the AI API service to Hugging Face Spaces using Ollama as your LLM backend (no external LLM API keys needed!).
## Why Ollama on Hugging Face Spaces?
- ✅ **No API costs** - Run models locally in your Space
- ✅ **Privacy** - Data stays within your Space
- ✅ **Model choice** - Use Llama 2, Llama 3, Mistral, Phi, Gemma, etc.
- ✅ **No rate limits** - Only limited by Space hardware
- ✅ **Full control** - Customize models and parameters
## Prerequisites
- Hugging Face account (free)
- Basic knowledge of Git
## Step-by-Step Deployment
### 1. Create a New Space
- Go to https://huggingface.co/new-space
- Choose:
  - **Name**: `ai-api-ollama` (or your preferred name)
  - **License**: MIT
  - **SDK**: Docker
  - **Hardware**:
    - CPU Basic (free): Works for small models (phi, gemma:2b)
    - CPU Upgrade ($0.60/hr): Better for medium models (llama2, mistral)
    - GPU T4 ($0.60/hr): Recommended for fast inference
    - GPU A10G ($3.15/hr): For large models (llama3:70b)
- Click **Create Space**
### 2. Clone Your Space Repository
```bash
git clone https://huggingface.co/spaces/YOUR_USERNAME/ai-api-ollama
cd ai-api-ollama
```
### 3. Copy Project Files
Copy all files from this project to your Space directory:
```bash
# From the ai-api-service directory
cp -r backend examples tests *.md *.json *.yml .dockerignore .env.example ../ai-api-ollama/
```
### 4. Create Hugging Face Space Dockerfile
Create a new Dockerfile optimized for Hugging Face Spaces with Ollama:
```dockerfile
FROM node:18-alpine AS builder

WORKDIR /app

# Copy package files
COPY package*.json ./
RUN npm ci

# Copy source code
COPY . .

# Build the application
RUN npm run build || echo "Build step skipped - Encore will build on startup"

# Production stage with Ollama
FROM node:18

WORKDIR /app

# Install Ollama
RUN curl -fsSL https://ollama.com/install.sh | sh

# Copy built application
COPY --from=builder /app ./

# Install production dependencies
RUN npm ci --only=production

# Set environment variables for Hugging Face Spaces
ENV PORT=7860
ENV OLLAMA_BASE_URL=http://localhost:11434
ENV OLLAMA_MODEL=llama2
ENV OLLAMA_EMBEDDING_MODEL=nomic-embed-text
ENV API_KEYS=demo-key-1,demo-key-2
ENV RATE_LIMIT_DEFAULT=60
ENV RATE_LIMIT_ADMIN=1000
ENV LOG_LEVEL=info
ENV ENABLE_BACKGROUND_WORKERS=true

EXPOSE 7860

# Create startup script
RUN echo '#!/bin/bash\n\
# Start Ollama in background\n\
ollama serve &\n\
OLLAMA_PID=$!\n\
\n\
# Wait for Ollama to start\n\
echo "Waiting for Ollama to start..."\n\
sleep 5\n\
\n\
# Pull the model\n\
echo "Pulling Ollama model: $OLLAMA_MODEL"\n\
ollama pull $OLLAMA_MODEL || echo "Model pull failed, will try on first request"\n\
\n\
# Pull embedding model if different\n\
if [ "$OLLAMA_EMBEDDING_MODEL" != "$OLLAMA_MODEL" ]; then\n\
echo "Pulling embedding model: $OLLAMA_EMBEDDING_MODEL"\n\
ollama pull $OLLAMA_EMBEDDING_MODEL || echo "Embedding model pull failed"\n\
fi\n\
\n\
# Start the API service\n\
echo "Starting AI API Service on port $PORT..."\n\
node .encore/build/backend/main.js || npm start\n\
' > /app/start.sh && chmod +x /app/start.sh

CMD ["/app/start.sh"]
```
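Before pushing, you can optionally smoke-test the image on your own machine. This is a sketch: it assumes Docker is installed locally and that you run it from the Space repo root containing the Dockerfile; `IMAGE` is an arbitrary local tag, not something Hugging Face requires.

```shell
# Optional local build check for the Dockerfile above.
IMAGE=ai-api-ollama
if command -v docker >/dev/null 2>&1; then
  docker build -t "$IMAGE" . || echo "build failed; check the Dockerfile"
  # Then run it, mapping the Space port; phi keeps the model pull small:
  # docker run --rm -p 7860:7860 -e OLLAMA_MODEL=phi "$IMAGE"
else
  echo "docker not found; skipping local build"
fi
```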
### 5. Configure Environment Variables in Space Settings
In your Space settings on Hugging Face:
- Go to **Settings → Variables and secrets**
- Add these environment variables:
| Variable | Value | Description |
|---|---|---|
| `API_KEYS` | `your-secret-key-here` | Comma-separated API keys for authentication |
| `ADMIN_API_KEYS` | `admin-key-here` | Admin-level API keys (optional) |
| `OLLAMA_MODEL` | `llama2` | Default: llama2, or use llama3, mistral, phi, gemma |
| `OLLAMA_EMBEDDING_MODEL` | `nomic-embed-text` | Embedding model for RAG |
| `RATE_LIMIT_DEFAULT` | `100` | Requests per minute for default users |
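For the `API_KEYS` value, any sufficiently random string works. One way to generate one from the command line (the `sk-` prefix is just a convention here, not something the service requires):

```shell
# Generate a 32-hex-character API key from /dev/urandom (portable; no openssl needed)
KEY="sk-$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')"
echo "$KEY"
```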
**Recommended Models by Hardware:**
| Hardware | Recommended Model | Speed | Quality |
|---|---|---|---|
| CPU Basic | `phi:latest` or `gemma:2b` | Fast | Good |
| CPU Upgrade | `llama2:latest` or `mistral:latest` | Medium | Better |
| GPU T4 | `llama3:latest` | Fast | Excellent |
| GPU A10G | `llama3:70b` | Medium | Best |
### 6. Create README.md for Your Space
Create a README.md in your Space root:
````markdown
---
title: AI API Service with Ollama
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
---

# AI API Service with Ollama

Production-ready AI API with chat, RAG, image generation, and voice synthesis.

## Features

- 💬 Multi-turn chat conversations
- 📚 RAG (Retrieval-Augmented Generation)
- 🖼️ Image generation
- 🎙️ Voice synthesis
- 📄 Document ingestion
- 🔑 API key authentication
- ⚡ Rate limiting

## Quick Start

### API Documentation

Base URL: `https://YOUR_USERNAME-ai-api-ollama.hf.space`

### Example Request

```bash
curl -X POST https://YOUR_USERNAME-ai-api-ollama.hf.space/ai/chat \
  -H "Authorization: Bearer demo-key-1" \
  -H "Content-Type: application/json" \
  -d '{
    "conversation": [
      {"role": "user", "content": "Hello! How are you?"}
    ]
  }'
```

### Available Endpoints

- `GET /health` - Health check
- `POST /ai/chat` - Chat conversation
- `POST /rag/query` - Query with retrieval
- `POST /image/generate` - Generate images
- `POST /voice/synthesize` - Text to speech
- `POST /upload` - Upload documents

See full API documentation in the repository.

### Using Your Own API Key

Replace `demo-key-1` with your configured API key from Space settings.

### Local Development

See QUICKSTART.md for local setup instructions.
````
### 7. Push to Hugging Face

```bash
git add .
git commit -m "Initial deployment with Ollama"
git push
```
### 8. Wait for Build

- Hugging Face will automatically build your Docker image
- This takes 5-10 minutes for the first build
- Watch the **Logs** tab for progress
- Initial startup will download the Ollama model (2-5 minutes depending on model size)
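Rather than refreshing the page, you can script the wait by polling the health endpoint until the Space answers. A minimal sketch; `SPACE_URL` is a placeholder and the retry count and sleep interval are arbitrary values to tune:

```shell
# Poll /health until the Space answers or we give up.
SPACE_URL="${SPACE_URL:-https://YOUR_USERNAME-ai-api-ollama.hf.space}"

wait_for_health() {
  url="$1"
  retries="${2:-60}"   # 60 tries x 10s = up to 10 minutes
  i=0
  while [ "$i" -lt "$retries" ]; do
    if curl -fsS "$url/health" >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 10
  done
  echo "timed out"
  return 1
}

# Usage: wait_for_health "$SPACE_URL"
```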
### 9. Test Your Deployment
Once the Space is running:
```bash
# Replace YOUR_USERNAME with your Hugging Face username
SPACE_URL="https://YOUR_USERNAME-ai-api-ollama.hf.space"

# Health check
curl $SPACE_URL/health

# Chat request
curl -X POST $SPACE_URL/ai/chat \
  -H "Authorization: Bearer demo-key-1" \
  -H "Content-Type: application/json" \
  -d '{
    "conversation": [
      {"role": "user", "content": "Tell me a joke about AI"}
    ]
  }'
```
## Optimizations for Hugging Face Spaces
### 1. Reduce Model Download Time

Pre-download models in the Dockerfile:
```dockerfile
RUN ollama pull llama2 && \
    ollama pull nomic-embed-text
```
### 2. Use Smaller Models for Free Tier
```bash
OLLAMA_MODEL=phi:latest
```
Phi is only about 1.6 GB, versus roughly 4 GB for Llama 2.
### 3. Enable Persistent Storage

Hugging Face Spaces with the persistent storage add-on mount it at `/data`:
```dockerfile
# Add to Dockerfile
VOLUME /data
ENV OLLAMA_MODELS=/data/ollama-models
```
This prevents re-downloading models on restart.
### 4. Optimize for Cold Starts
Add a model warm-up step to the startup script. `ollama run` has no `--timeout` flag, so bound it with the coreutils `timeout` wrapper instead:

```bash
# Add to start.sh
echo "Warming up model..."
timeout 60 ollama run "$OLLAMA_MODEL" "Hello" || true
```
## Cost Comparison
| Option | Cost | Pros | Cons |
|---|---|---|---|
| Free CPU | $0 | Free! | Slow inference, small models only |
| CPU Upgrade | $0.60/hr (~$432/mo) | Better performance | Still slower than GPU |
| GPU T4 | $0.60/hr (~$432/mo) | Fast inference | Limited for huge models |
| OpenAI API | Pay per token | No hosting, fast | Ongoing costs, data sent to OpenAI |
| Self-hosted | VPS costs | Full control | Maintenance required |
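The monthly figures above are straightforward to reproduce: hourly rate × 24 hours × 30 days for an always-on Space. In shell, working in cents avoids floating point:

```shell
# Back-of-envelope monthly cost for an always-on Space
RATE_CENTS=60                          # $0.60/hr (GPU T4 or CPU Upgrade)
HOURS=$((24 * 30))                     # 720 hours in a 30-day month
MONTHLY=$((RATE_CENTS * HOURS / 100))  # convert cents back to dollars
echo "~\$${MONTHLY}/mo"                # prints ~$432/mo
```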
**Recommendation**: Start with Free CPU + Phi for testing; upgrade to GPU T4 + Llama 3 for production.
## Troubleshooting
### Space won't start

Check logs for:

- Ollama installation errors → use the official Ollama install script
- Model download timeout → use a smaller model or upgrade hardware
- Port conflicts → ensure `PORT=7860`
"No LLM adapter available"
Solution: Ollama adapter is now always initialized. Check Ollama is running:
```bash
# In Space terminal
curl http://localhost:11434/api/tags
```
### Slow responses

Solutions:

- Use a smaller model (phi instead of llama2)
- Upgrade to GPU hardware
- Reduce `max_tokens` in requests
### Model not found

**Solution**: Pull the model manually:
```bash
# In Space terminal or startup script
ollama pull llama2
```
## Advanced Configuration
### Use Multiple Models
```bash
# In Space settings
OLLAMA_MODEL=llama3:latest
```
Then specify the model in API requests:

```json
{
  "conversation": [...],
  "model": "llama3"
}
```
### Custom System Prompts
```bash
curl -X POST $SPACE_URL/ai/chat \
  -H "Authorization: Bearer your-key" \
  -H "Content-Type: application/json" \
  -d '{
    "conversation": [
      {"role": "system", "content": "You are a helpful coding assistant."},
      {"role": "user", "content": "Explain Python decorators"}
    ]
  }'
```
### Enable RAG with Documents
```bash
# Upload a document
curl -X POST $SPACE_URL/upload \
  -H "Authorization: Bearer your-key" \
  -F "[email protected]"

# Query with RAG
curl -X POST $SPACE_URL/rag/query \
  -H "Authorization: Bearer your-key" \
  -H "Content-Type: application/json" \
  -d '{"query": "What does the document say about X?"}'
```
## Monitoring
### Check Space Health

```bash
curl https://YOUR_USERNAME-ai-api-ollama.hf.space/health
```
### View Metrics

```bash
curl https://YOUR_USERNAME-ai-api-ollama.hf.space/metrics \
  -H "Authorization: Bearer your-key"
```
## Scaling
### Horizontal Scaling
Hugging Face Spaces don't support horizontal scaling. For high traffic:
- Use multiple Spaces behind a load balancer
- Deploy to cloud (AWS ECS, GCP Cloud Run) with auto-scaling
- Use managed API (OpenAI, Anthropic) for high volume
### Vertical Scaling
Upgrade hardware in Space settings:
- Free CPU → CPU Upgrade (2x faster)
- CPU → GPU T4 (10x faster)
- GPU T4 → GPU A10G (2x faster, larger models)
## Support
## License

MIT License - see LICENSE file.