# Deploying AI API Service to Hugging Face Spaces with Ollama
This guide shows how to deploy the AI API service to Hugging Face Spaces using Ollama as the LLM backend (no external LLM API keys needed).
## Why Ollama on Hugging Face Spaces?
✅ **No API costs** - Run models locally in your Space
✅ **Privacy** - Data stays within your Space
✅ **Model choice** - Use Llama 2, Llama 3, Mistral, Phi, Gemma, etc.
✅ **No rate limits** - Only limited by Space hardware
✅ **Full control** - Customize models and parameters
## Prerequisites
- Hugging Face account (free)
- Basic knowledge of Git
## Step-by-Step Deployment
### 1. Create a New Space
1. Go to https://huggingface.co/new-space
2. Choose:
- **Name**: `ai-api-ollama` (or your preferred name)
- **License**: MIT
- **SDK**: Docker
- **Hardware**:
- **CPU Basic (free)**: Works for small models (phi, gemma:2b)
- **CPU Upgrade ($0.60/hr)**: Better for medium models (llama2, mistral)
- **GPU T4 ($0.60/hr)**: Recommended for fast inference
- **GPU A10G ($3.15/hr)**: For large models (llama3:70b)
3. Click **Create Space**
### 2. Clone Your Space Repository
```bash
git clone https://huggingface.co/spaces/YOUR_USERNAME/ai-api-ollama
cd ai-api-ollama
```
### 3. Copy Project Files
Copy all files from this project to your Space directory:
```bash
# From the ai-api-service directory
cp -r backend examples tests *.md *.json *.yml .dockerignore .env.example ../ai-api-ollama/
```
### 4. Create Hugging Face Space Dockerfile
Create a new `Dockerfile` optimized for Hugging Face Spaces with Ollama:
```dockerfile
FROM node:18-alpine AS builder
WORKDIR /app
# Copy package files
COPY package*.json ./
RUN npm ci
# Copy source code
COPY . .
# Build the application
RUN npm run build || echo "Build step skipped - Encore will build on startup"
# Production stage with Ollama
FROM node:18
WORKDIR /app
# Install Ollama
RUN curl -fsSL https://ollama.com/install.sh | sh
# Copy built application
COPY --from=builder /app ./
# Install production dependencies
RUN npm ci --omit=dev
# Set environment variables for Hugging Face Spaces
ENV PORT=7860
ENV OLLAMA_BASE_URL=http://localhost:11434
ENV OLLAMA_MODEL=llama2
ENV OLLAMA_EMBEDDING_MODEL=nomic-embed-text
ENV API_KEYS=demo-key-1,demo-key-2
ENV RATE_LIMIT_DEFAULT=60
ENV RATE_LIMIT_ADMIN=1000
ENV LOG_LEVEL=info
ENV ENABLE_BACKGROUND_WORKERS=true
EXPOSE 7860
# Create startup script
RUN echo '#!/bin/bash\n\
# Start Ollama in background\n\
ollama serve &\n\
OLLAMA_PID=$!\n\
\n\
# Wait for Ollama to start\n\
echo "Waiting for Ollama to start..."\n\
sleep 5\n\
\n\
# Pull the model\n\
echo "Pulling Ollama model: $OLLAMA_MODEL"\n\
ollama pull $OLLAMA_MODEL || echo "Model pull failed, will try on first request"\n\
\n\
# Pull embedding model if different\n\
if [ "$OLLAMA_EMBEDDING_MODEL" != "$OLLAMA_MODEL" ]; then\n\
echo "Pulling embedding model: $OLLAMA_EMBEDDING_MODEL"\n\
ollama pull $OLLAMA_EMBEDDING_MODEL || echo "Embedding model pull failed"\n\
fi\n\
\n\
# Start the API service\n\
echo "Starting AI API Service on port $PORT..."\n\
node .encore/build/backend/main.js || npm start\n\
' > /app/start.sh && chmod +x /app/start.sh
CMD ["/app/start.sh"]
```
### 5. Configure Environment Variables in Space Settings
In your Space settings on Hugging Face:
1. Go to **Settings** β **Variables and secrets**
2. Add these environment variables:
| Variable | Value | Description |
|----------|-------|-------------|
| `API_KEYS` | `your-secret-key-here` | Comma-separated API keys for authentication |
| `ADMIN_API_KEYS` | `admin-key-here` | Admin-level API keys (optional) |
| `OLLAMA_MODEL` | `llama2` | Default: llama2, or use llama3, mistral, phi, gemma |
| `OLLAMA_EMBEDDING_MODEL` | `nomic-embed-text` | Embedding model for RAG |
| `RATE_LIMIT_DEFAULT` | `100` | Requests per minute for default users |
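The `API_KEYS` variable is a plain comma-separated list. A minimal sketch of how such a value can be parsed and validated (illustrative only, not the service's actual code):

```python
import os

def parse_api_keys(raw: str) -> set[str]:
    """Split a comma-separated API_KEYS value into a set of non-empty keys."""
    return {key.strip() for key in raw.split(",") if key.strip()}

# Same format as the table above; falls back to the demo keys if unset.
keys = parse_api_keys(os.environ.get("API_KEYS", "demo-key-1,demo-key-2"))
print(keys)
```

Whitespace around commas and empty entries are tolerated, so `key-1, key-2` and `key-1,key-2` configure the same keys.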
**Recommended Models by Hardware:**
| Hardware | Recommended Model | Speed | Quality |
|----------|------------------|-------|---------|
| CPU Basic | `phi:latest` or `gemma:2b` | Fast | Good |
| CPU Upgrade | `llama2:latest` or `mistral:latest` | Medium | Better |
| GPU T4 | `llama3:latest` | Fast | Excellent |
| GPU A10G | `llama3:70b` | Medium | Best |
### 6. Create README.md for Your Space
Create a `README.md` in your Space root:
````markdown
---
title: AI API Service with Ollama
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
---
# AI API Service with Ollama
Production-ready AI API with chat, RAG, image generation, and voice synthesis.
## Features
- 💬 Multi-turn chat conversations
- 📚 RAG (Retrieval-Augmented Generation)
- 🖼️ Image generation
- 🎙️ Voice synthesis
- 📄 Document ingestion
- 🔐 API key authentication
- ⚡ Rate limiting
## Quick Start
### API Documentation
Base URL: `https://YOUR_USERNAME-ai-api-ollama.hf.space`
### Example Request
```bash
curl -X POST https://YOUR_USERNAME-ai-api-ollama.hf.space/ai/chat \
  -H "Authorization: Bearer demo-key-1" \
  -H "Content-Type: application/json" \
  -d '{
    "conversation": [
      {"role": "user", "content": "Hello! How are you?"}
    ]
  }'
```
### Available Endpoints
- `GET /health` - Health check
- `POST /ai/chat` - Chat conversation
- `POST /rag/query` - Query with retrieval
- `POST /image/generate` - Generate images
- `POST /voice/synthesize` - Text to speech
- `POST /upload` - Upload documents
See full API documentation in the repository.
## Using Your Own API Key
Replace `demo-key-1` with your configured API key from Space settings.
## Local Development
See [QUICKSTART.md](QUICKSTART.md) for local setup instructions.
````
### 7. Push to Hugging Face
```bash
git add .
git commit -m "Initial deployment with Ollama"
git push
```
### 8. Wait for Build
- Hugging Face will automatically build your Docker image
- This takes 5-10 minutes for first build
- Watch the **Logs** tab for progress
- Initial startup will download the Ollama model (2-5 minutes depending on model size)
### 9. Test Your Deployment
Once the Space is running:
```bash
# Replace YOUR_USERNAME with your Hugging Face username
SPACE_URL="https://YOUR_USERNAME-ai-api-ollama.hf.space"
# Health check
curl $SPACE_URL/health
# Chat request
curl -X POST $SPACE_URL/ai/chat \
-H "Authorization: Bearer demo-key-1" \
-H "Content-Type: application/json" \
-d '{
"conversation": [
{"role": "user", "content": "Tell me a joke about AI"}
]
}'
```
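The same request can be issued from Python using only the standard library. A small sketch (`YOUR_USERNAME` and `demo-key-1` are placeholders, and the response shape depends on your deployment):

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, messages: list) -> urllib.request.Request:
    """Construct the POST /ai/chat request used in the curl examples above."""
    body = json.dumps({"conversation": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/ai/chat",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "https://YOUR_USERNAME-ai-api-ollama.hf.space",
    "demo-key-1",
    [{"role": "user", "content": "Tell me a joke about AI"}],
)
# To actually send it (requires the Space to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```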
## Optimizations for Hugging Face Spaces
### 1. Reduce Model Download Time
Pre-download models in the Dockerfile so they are baked into the image. Note that `ollama pull` needs the Ollama server running, so start it in the same `RUN` step (this also increases image size by the size of the models):
```dockerfile
RUN ollama serve & sleep 5 && \
    ollama pull llama2 && \
    ollama pull nomic-embed-text
```
### 2. Use Smaller Models for Free Tier
```env
OLLAMA_MODEL=phi:latest
```
Phi is roughly 1.6GB, compared to about 3.8GB for Llama 2.
### 3. Enable Persistent Storage
Hugging Face Spaces have persistent storage in `/data`:
```dockerfile
# Add to Dockerfile
VOLUME /data
ENV OLLAMA_MODELS=/data/ollama-models
```
This prevents re-downloading models on restart.
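A startup-script snippet that points Ollama at persistent storage before the server starts (a sketch; it assumes the `/data` mount is enabled in your Space settings and falls back to a local directory otherwise):

```bash
# Prefer persistent storage when the /data mount is available,
# otherwise fall back to the default local model directory.
if [ -d /data ]; then
    OLLAMA_MODELS="${OLLAMA_MODELS:-/data/ollama-models}"
else
    OLLAMA_MODELS="${OLLAMA_MODELS:-$HOME/.ollama/models}"
fi
mkdir -p "$OLLAMA_MODELS"
export OLLAMA_MODELS
echo "Ollama models stored in: $OLLAMA_MODELS"
```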
### 4. Optimize for Cold Starts
Add a model warmup to the startup script. `ollama run` has no timeout flag of its own, so wrap it with the coreutils `timeout` command:
```bash
# Add to start.sh
echo "Warming up model..."
timeout 10s ollama run "$OLLAMA_MODEL" "Hello" || true
```
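The fixed `sleep 5` in the startup script can also be replaced with an actual readiness check. A small retry helper in plain POSIX shell (a sketch; the Ollama tags endpoint is a cheap liveness probe):

```bash
# Retry a command up to N times, pausing one second between attempts.
wait_for() {
    tries="$1"; shift
    i=0
    while [ "$i" -lt "$tries" ]; do
        if "$@"; then return 0; fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Usage in start.sh: block until the Ollama API responds (up to 30s).
# wait_for 30 curl -sf http://localhost:11434/api/tags > /dev/null
```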
## Cost Comparison
| Option | Cost | Pros | Cons |
|--------|------|------|------|
| **Free CPU** | $0 | Free! | Slow inference, small models only |
| **CPU Upgrade** | $0.60/hr (~$432/mo) | Better performance | Still slower than GPU |
| **GPU T4** | $0.60/hr (~$432/mo) | Fast inference | Limited for huge models |
| **OpenAI API** | Pay per token | No hosting, fast | Ongoing costs, data sent to OpenAI |
| **Self-hosted** | VPS costs | Full control | Maintenance required |
**Recommendation**: Start with **Free CPU + Phi** for testing, upgrade to **GPU T4 + Llama3** for production.
## Troubleshooting
### Space won't start
**Check logs for**:
- Ollama installation errors β Use official Ollama install script
- Model download timeout β Use smaller model or upgrade hardware
- Port conflicts β Ensure PORT=7860
### "No LLM adapter available"
**Solution**: The Ollama adapter is initialized by default; verify that Ollama is actually running inside the Space:
```bash
# In Space terminal
curl http://localhost:11434/api/tags
```
### Slow responses
**Solutions**:
- Use smaller model (phi instead of llama2)
- Upgrade to GPU hardware
- Reduce max_tokens in requests
### Model not found
**Solution**: Pull model manually:
```bash
# In Space terminal or startup script
ollama pull llama2
```
## Advanced Configuration
### Use Multiple Models
```env
# In Space settings
OLLAMA_MODEL=llama3:latest
```
Then specify model in API requests:
```json
{
"conversation": [...],
"model": "llama3"
}
```
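Per-request overrides fall back to the Space-wide `OLLAMA_MODEL` default when the request carries no `model` field. A sketch of that resolution logic (illustrative only, not the service's actual code):

```python
import os

def resolve_model(request_body: dict, default: str = "") -> str:
    """Use the request's "model" field if present, else the OLLAMA_MODEL default."""
    default = default or os.environ.get("OLLAMA_MODEL", "llama2")
    return request_body.get("model", default)

print(resolve_model({"conversation": [], "model": "llama3"}))   # per-request override
print(resolve_model({"conversation": []}, default="mistral"))   # falls back to default
```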
### Custom System Prompts
```bash
curl -X POST $SPACE_URL/ai/chat \
-H "Authorization: Bearer your-key" \
-H "Content-Type: application/json" \
-d '{
"conversation": [
{"role": "system", "content": "You are a helpful coding assistant."},
{"role": "user", "content": "Explain Python decorators"}
]
}'
```
### Enable RAG with Documents
```bash
# Upload a document
curl -X POST $SPACE_URL/upload \
-H "Authorization: Bearer your-key" \
-F "[email protected]"
# Query with RAG
curl -X POST $SPACE_URL/rag/query \
-H "Authorization: Bearer your-key" \
-H "Content-Type: application/json" \
-d '{"query": "What does the document say about X?"}'
```
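Under the hood, a RAG query embeds the question, finds the stored chunks most similar to it, and passes them to the model as context. A minimal conceptual sketch using toy bag-of-words cosine similarity (the real service uses `nomic-embed-text` vectors, not word counts):

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for real dense vectors."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list, k: int = 1) -> list:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "The warranty covers parts and labor for two years.",
    "Shipping typically takes five business days.",
]
print(retrieve("How long is the warranty?", chunks))
```

The retrieved chunk(s) are then prepended to the prompt before the model generates an answer.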
## Monitoring
### Check Space Health
```bash
curl https://YOUR_USERNAME-ai-api-ollama.hf.space/health
```
### View Metrics
```bash
curl https://YOUR_USERNAME-ai-api-ollama.hf.space/metrics \
-H "Authorization: Bearer your-key"
```
## Scaling
### Horizontal Scaling
Hugging Face Spaces don't support horizontal scaling. For high traffic:
1. **Use multiple Spaces** with load balancer
2. **Deploy to cloud** (AWS ECS, GCP Cloud Run) with auto-scaling
3. **Use managed API** (OpenAI, Anthropic) for high volume
### Vertical Scaling
Upgrade hardware in Space settings:
- Free CPU β CPU Upgrade (2x faster)
- CPU β GPU T4 (10x faster)
- GPU T4 β GPU A10G (2x faster, larger models)
## Support
- [GitHub Issues](https://github.com/your-org/ai-api-service/issues)
- [Hugging Face Discussions](https://huggingface.co/spaces/YOUR_USERNAME/ai-api-ollama/discussions)
- [Documentation](https://github.com/your-org/ai-api-service)
## License
MIT License - see LICENSE file